Michael Schuresko
News
Latest News
News
  • Left Standard Cognition, at Bluescape Inc
    • 2021-03-27T17:43:00.004-07:00
    • (link)
  • Back in the robotics world
    • I now work for Standard Cognition
    • 2019-06-25T20:18:00.003-07:00
    • (link)
  • SIAM paper
    • I had forgotten to post this earlier...
      Distributed Tree Rearrangements for Reachability and Robust Connectivity
      published in : SIAM Journal on Control and Optimization
    • 2012-10-11T20:50:00.003-07:00
    • (link)
  • Back in the graphics world
    • Leaving Google. Starting at oblong.
    • 2012-02-18T18:12:00.000-08:00
    • (link)
  • job switch / google
    • I now work for Google. Hooked was great fun, and I worked on interesting projects with wonderful people.
    • 2010-12-13T13:45:00.000-08:00
    • (link)
  • Employment
    • I have now joined the workforce : this Monday I will start work at Hooked Wireless
    • 2009-09-30T22:10:00.000-07:00
    • (link)
  • doctored
    • I officially have my PhD, my thesis is available at http://tintoretto.ucsd.edu/jorge/group/data/PhDThesis-MikeSchuresko-09.pdf
    • 2009-09-30T22:07:00.000-07:00
    • (link)
  • First post
    • The "news" section is now synced to my new blog.
    • 2009-09-30T19:37:00.000-07:00
    • (link)

For more news, see http://mikeschuresko.blogspot.com/search/label/news.
Current
What I'm currently studying

I graduated with a PhD from the Department of Applied Math and Statistics in the School of Engineering of the University of California, Santa Cruz. My research was with Prof. Jorge Cortés in the Mechanical and Aerospace Engineering department of the University of California, San Diego. Our work was in the general area of robotic control and control theory. I desperately need to hand off maintenance of the web page for the controls research group at UCSC.

My research sits at the intersection of distributed algorithms and distributed control. Distributed control is the study of how multiple computational units, sensors and actuators can be networked together to control a single system, usually with an emphasis on requiring minimal communication. Distributed algorithms studies similar topics, but without sensing and actuation, and with a greater emphasis on computation. I largely deal with problems featuring a swarm of robots communicating over a wireless network.

My thesis dealt with problems related to how to constrain the motion of robots in a swarm so as not to lose global network connectivity of the swarm. I focused on ways to do this which require only local communication, with the goal of minimizing the amount of communication required.

Bio
A brief sketch of my academic and industrial experience

I started graduate school intending to study computer graphics, and still have a collection of class projects and flashy demos from those days. I chose my current advisor and research area after taking a class on non-linear control theory.

Prior to coming to UCSC, I worked for a few years as a programmer at Common Point Inc and the now-defunct Sense8 Corporation. I was lucky enough to have the opportunity to write collision-detection systems professionally for one of these companies, and to improve an existing collision-detection system for the other.

Before that, I completed my undergraduate degree in computer science (with a minor in mathematics) at Carnegie Mellon University. I still occasionally visit their puzzle page.

Previous internships include work on graphics and GIS at TerraSim and work on graphics and robotics in support of machine learning and robot learning research at the Naval Center for Applied Research in Artificial Intelligence.

Publications
Papers and cited works
Publications
Journal
Journal articles
  1. Distributed tree rearrangements for reachability and robust connectivity
    M Schuresko and Jorge Cortés
    SIAM Journal on Control and Optimization, submitted
    • See here, here, or here for a simulation and visualization in Java
  2. Distributed motion constraints for algebraic connectivity of robotic networks
    M Schuresko and Jorge Cortés
    Journal of Intelligent and Robotic Systems, accepted
    • See here for a simulation and visualization in Java
Conference
Short / Conference articles
  1. Distributed tree rearrangements for reachability and robust connectivity
    M Schuresko and Jorge Cortés
    Hybrid Systems: Computation and Control 2009, San Francisco, 2009
    • See here for a simulation and visualization in Java
  2. Distributed motion constraints for algebraic connectivity of robotic networks
    M Schuresko and Jorge Cortés
    Proceedings of the 47th IEEE Int. Conf. Decision and Control, Cancun, Mexico, 2008.
    • See here for a simulation and visualization in Java.
    • You can also see my slides from the talk I gave at CDC
  3. Safe graph rearrangements for distributed connectivity of robotic networks
    M Schuresko and Jorge Cortés
    Proceedings of the 46th IEEE Int. Conf. Decision and Control, New Orleans, Louisiana, USA, 2007, to appear.
    • See here for a simulation and visualization in Java
  4. Correctness analysis and optimality bounds of multi-spacecraft formation initialization algorithms
    M Schuresko and Jorge Cortés
    Proceedings of the 45th IEEE Int. Conf. Decision and Control, San Diego, California, USA, 2006.
    • A simulation platform for some of the algorithms described in the paper is available here.
Software
(software referenced in academic publications)
  1. cclsim : Control and Communications Law SIMulator. The Java framework we use for our swarm simulations.
  2. A sampling of other software can be found elsewhere on this page
Misc_pubs
Other published work
  1. An undergraduate class project on single-view modeling done for a class on image-based modeling and rendering
    This project was cited in "A Two-Stage Approach for Interpreting Line Drawings of Curved Objects" presented at the EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (2004)
  2. Technical reports for 2007 and 2006.
Thesis
PhD Thesis
Controlling global network connectivity of robot swarms with local interactions
Department of Applied Mathematics and Statistics, University of California, Santa Cruz
Other
Demos, toys, fun, etc.
Demos
...from my days as a graphics student
  1. A class project for Scientific Visualization Seminar in which I developed a visualization of Hurricane Isabel.
  2. A project for a machine learning class on using reinforcement learning and neural networks to train a simulated robot
  3. A Neural Network Visualization I developed for my own amusement.
  4. An application of genetic algorithms to model-based vision.
  5. Statistical volume rendering. A small experiment trying to use volume rendering to visualize volumes of statistical distributions. Never really got off the ground.
  6. Forced Couette flow. Simulation and visualization showing the beginning of turbulence in the "plane Couette flow."
Fun
Hobbies/Toys/misc.
Fun
  1. Playing with audio samples (more available here )
  2. Composition with midi is a hobby of mine.
  3. Comics, comics, comics.
  4. My favorite internet radio station
Toys

  1. I have recently become a fan of Processing.org, and have made my first tiny demo using it.
  2. I wrote this little calculator thing as a sort of "Hello World" while learning JavaScript.
  3. This piece of Python code makes thumbnail pages from directories of images. The results look like this (requires PIL). A rough sketch of the approach appears just after this list.
  4. A poorly-documented demonstration of engineering amplifier non-linearities to produce particular sets of overtones.
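  A rough sketch of the thumbnail-page idea from item 3, assuming the PIL/Pillow package; directory names and sizes here are made up for illustration:

      import os
      from pathlib import Path
      from PIL import Image  # PIL, or its maintained fork Pillow

      def make_thumbnail_page(src_dir, out_dir, size=(128, 128)):
          """Write a thumbnail for every image in src_dir plus an index.html linking back to the originals."""
          src, out = Path(src_dir), Path(out_dir)
          out.mkdir(parents=True, exist_ok=True)
          rows = []
          for p in sorted(src.iterdir()):
              if p.suffix.lower() not in (".jpg", ".jpeg", ".png", ".gif"):
                  continue
              im = Image.open(p)
              im.thumbnail(size)  # shrink in place, preserving aspect ratio
              thumb = out / ("thumb_" + p.name)
              im.save(thumb)
              rows.append('<a href="%s"><img src="%s"></a>' % (os.path.relpath(p, out), thumb.name))
          (out / "index.html").write_text("<html><body>\n%s\n</body></html>" % "\n".join(rows))

      make_thumbnail_page("photos", "photos/thumbs")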

Misc

  1. If you know of any solid linear algebra packages for C, C++ or Java, I would be interested in hearing about them; please send me e-mail and I will collate the information here.
  2. Apparently "swarm robotics" is an emerging technology

blog
my "blog"
Blog
  • On licensing software engineers
    • In this post I will attempt to argue that, if one believes there should be a licensing process for professional software engineers (and there are good arguments that there shouldn't be), then the licensing process for professional actuaries would serve as an excellent model to copy, and is superior to modeling a software engineering licensing process off of the so-called "Professional Engineer" (PE) license available to, e.g., civil, chemical, mechanical and electrical engineers.

      Every once in a while, particularly after a catastrophic software-induced failure (for instance, after the allegations of unintended acceleration on the part of the Toyota acceleration control module), I hear renewed calls for software engineers to be professionally licensed, to ensure that only competent and trustworthy people can practice in the field (or can practice particular roles in the field).

      What I hear people propose the most often is to add a "Software Engineer" exam to the set of exams one can take to become a licensed "Professional Engineer".

      This is a problematic idea for several reasons.

      First off, the standard PE track requires taking a "Fundamentals of Engineering" exam prior to taking the PE subject-matter exam. The FE exam is all good stuff to know, but it seems to cover the overlap between traditional fields of physical engineering. A computer science student or software engineer might know this material by coincidence, but current programs aren't necessarily designed to impart this knowledge.

      Second off, the standard PE certification strongly assumes that one has an engineering degree from an ABET-accredited engineering program, likely in the engineering school of a major university. Without even getting to the many software engineers who don't even major in CS, the ABET requirement, as it stands, would make it impossible for people who majored in CS at Carnegie Mellon or, I believe, at CalTech to attain a PE license. Discussions of PE certification for software engineers have talked about backdoors and grandfather clauses for institutions like these -- but when the top programs in a discipline require a grandfather clause for the certification you're proposing, perhaps something is wrong with the certification.

      Third off, many of the best practical software engineers I know don't have degrees in computer science or computer engineering. Many have degrees in other STEM fields, and some have no degree at all, yet are far more practically competent than large numbers of people who hold CS degrees.

      Luckily there are professional domains other than engineering that have long-established professional licensing systems. I propose that a better one to copy would be the system used to license "actuaries".

      Actuaries work in industries like finance, insurance, etc. Their role involves measuring and managing risk and uncertainty. To an outsider like myself their field looks like a highly specialized subfield of applied statistics.

      And they have a licensing system.

      Best of all, they have a licensing system that, in the US, does not require that your degree be in any particular field (source : https://www.howtobecome.com/how-to-become-an-actuary ). Yet, at the same time, their licensing system is rigorous enough that those actuaries who did not major in "actuarial science" often have undergraduate degrees in mathematics.

      And, like computing, their discipline is considered to be a quantitative field, which, arguably, is a better analogy for how to treat professional software development than "engineering".

      Their certification process proceeds in steps and involves a mixture of exams and apprenticeship. Exams include topics like Markov chains, survival models, generalized linear models, etc. Apprenticeship, I presume, entails working (for pay?) under a certified actuary.

      Of course, I, personally, prefer the current system of "no certification process whatsoever". But, if you're going to make a certification process for software developers, I think the actuarial model is a better model for most day-to-day software engineering than the PE model.

      I *could*, however, imagine a PE license for something like "controls" or "cyber-physical systems", and a world in which someone in charge of the Toyota acceleration control module would have to be dual-certified in that and in a non-PE software development certification modeled after actuarial certification. In such a world, the person in charge of writing your banking software might have to hold some currently non-existent software development certification, but would not need a PE certification to perform what is, admittedly, a critical function, but not one that requires training in physical engineering.

    • 2019-06-25T20:49:00.000-07:00
    • (link)
  • No more IR remotes.
    • At work I frequently program computers attached to banks of 9 or 12 HDMI screens, sometimes programming clusters of such computers to drive even larger display walls. At home, I only have one HDMI-compatible screen, but the array of small, cheap devices attached to it is growing by the minute. In each setting, the fact that these screens are designed to be controlled and configured via IR remotes is causing me a headache.

      At work my IR-remote headache is that doing any sort of action to the screens (turning them on, turning them off, adjusting them) requires pointing the remote in such a way as to control one of them, but none of the identically-manufactured neighbors. For all I know, maybe there's a way to pair each one with a separate remote -- but then one would have to remember which remote went to which screen, and I doubt any such pairing would allow 45-way uniqueness. Our "solution" has been to try to avoid ever needing the remotes, and to stick a paper cup around the IR emitter so that the IR remote can be shielded from all but one screen at a time.

      At home, I only have one screen I want to control, but to use it with any device I need to first find the thing I use to control the device (possibly a laptop or a smartphone) *and* the remote for the TV. Mostly I only need to turn the thing on/off, change the volume, and switch HDMI inputs -- but it'd be nice not to need a separate remote (which I always lose) for those actions.

      I can't be the only one facing these problems. Consumers seem likely to plug an ever-increasing array of strange devices into their home television sets, not all of which even come with IR remotes, while my belief that the landscape of screens in the office environment will change radically enough to make this useful in the workplace is... ...something that I am implicitly gambling on in the "startup Lotto".

      And it turns out the HDMI people thought of this already. So there is already most of a solution out there, called "HDMI-CEC" for "HDMI Consumer Electronics Control" (see Wikipedia's page on the topic). But, after fiddling with it, I cannot yet get it to do everything I want.

      First off, I can't send HDMI-CEC signals over any HDMI connection on a discrete GPU from a (particular manufacturer of high-end graphics cards whose name starts with "nv"). The manufacturer, apparently, believes that HDMI-CEC is a consumer feature, and is not something their high-end cards should support. I beg to differ. I write software for walls of between 30 and 45 monitors and would like to be able to programmatically turn these monitors on, and apply things like "gamma correction", "color matching", "color temperature" and other settings. I would also like to do this on a per-program basis, so that one program driven by one set of designers/artists can have one set of monitor settings, and another program, designed by a different set of people, can have a different set of monitor settings. I'd also like to be able to individually address monitor settings for such a bank of monitors so that we can write our own software for calibrating them. Or, hey, so that some third party can write software for calibrating large banks of monitors, and for storing the monitor settings in a way that allows us to programmatically re-apply them at any time. So, if anyone reading this works at a major graphics card manufacturer (particularly one whose discrete-GPU HDMI outputs don't support things like HDMI-CEC), please pass this along.

      Second off, it is not clear to me whether HDMI-CEC is actually capable of changing all the settings I want to be able to change. Libraries exist that allow me to send commands for on/off, input selection and volume (although the volume for some TVs seems to only work if there is an external audio amplifier, which baffles me). But there seems to be a lower level of communication available which I have not fully sat down and understood. I don't know what this lower layer is capable of; I see some menu-related messages, but it is not clear what I can do with them. I suspect I should be able to record the sequence of menu actions required to apply a particular setting on a particular manufacturer's screens, and play these back blindly, which means it might be possible, albeit awkward, to control any monitor setting reachable via the menu system normally accessed with the screen's IR remote. But it'd be nicest to have device-independent ways to set standard pieces of monitor configuration (color temperature, gamma value, audio volume) that actually work; input selection I can already figure out using the HDMI-CEC tools for the Raspberry Pi.
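      For what it's worth, the way I've been poking at this from scripts is by piping commands into libcec's cec-client. A rough sketch (Python; it assumes cec-client is installed and that the display is CEC logical address 0, and the flags and command strings are as I remember them from the libcec documentation, so treat them as a starting point rather than gospel):

        import subprocess

        def cec(command):
            """Send one CEC command string (e.g. 'on 0', 'standby 0') via cec-client."""
            # -s : read a single command from stdin, execute it, and exit
            # -d 1 : keep the log output terse
            subprocess.run(["cec-client", "-s", "-d", "1"],
                           input=command.encode(), check=True)

        cec("on 0")         # power the display on
        # cec("standby 0")  # ...and back to standby
        # cec("scan")       # list the devices visible on the CEC bus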

      Update

      It turns out that, if I'm willing to stick with nvidia hardware only, nvidia-smi and friends will do almost everything I need. And what it doesn't do, I don't need to do for the massive display arrays I deal with at work (I have no need to individually power monitors on and off there, audio goes over a separate channel, and I have no need for input switching)

      For home use, HDMI-CEC remains just barely inadequate -- but, hey, maybe buying a newer TV (and an external audio system) would fix the problem.

    • 2016-02-14T09:45:00.001-08:00
    • (link)
  • Folding bicycles, wheel size, stability
    • One of the problems with public transportation in many American cities is the problem of how to get the last 2 miles from where your train / express bus / CalTrain stops to where you're actually trying to go (equivalently, how to get from your starting position to your train/bus stop). In some cities the solution is to take a different bus/train line and transfer : but in Los Angeles County and in Silicon Valley this is often untenable, and can turn a 20-minute ride on an express bus into a multi-hour transit saga.

      A frequently proposed solution is to take one's bicycle on the train/bus. This works somewhat well for trains, but buses (I live close to an express bus line) in Los Angeles County tend to have only two slots on the front of the bus for carrying bicycles. If those are full when you catch the bus, you're out of luck. At certain times of day, even trains will restrict the number of full-sized bicycles that can be brought on board.

      One way around this is to use a folding bicycle, such as the one pictured above. Conceptually it is great, and it makes a bike-and-bus commute feasible for me that would otherwise be difficult or unreliable. But it has issues.

      On the one hand, the folded bike is difficult to fit under the seat. The Metro web page states: "Folding bikes with 20 inch or smaller wheels can be taken on board. Make sure your bike is folded and stored under a rear seat so as not to block aisles and doorways." This is difficult and awkward. The size of the bicycle when folded (as can be seen from the pictures above) is dominated by the size of the wheels.

      On the other hand, the stability of the bicycle (and its ability to handle rough terrain) affects its ability to go fast. Already, riding my 20-inch-wheeled folding bike feels sluggish and unstable compared to my full-sized bicycle.

      So how might one try to go about making a folding bicycle more stable while reducing the size of its wheels? Here is a proposal.

      For lateral stability it turns out there has been some interesting recent work on what makes bicycles stable. This video shows a small-wheeled bicycle that rides stably, without the help of trail or angular inertia. Further details on this work can be found. The authors even mention folding bicycles in their excellent TED talk.

      For stability going over rough terrain, potholes and curbs : NASA has already faced the problem of making a lightweight vehicle that can be folded into a tight package and retain the ability to traverse large obstacles. Their solution for many of the Mars rovers was the rocker-bogie suspension system. It is not designed to go at high speeds, and may have neglected dynamic stability : but it may be a good start. Note that a "bicycle" designed with such a system might end up having far more than 2 wheels.

      Neither of these, by itself, constitutes a "solution" to the problem of making a stable folding bicycle with small wheels capable of riding over rough terrain. But hopefully it provides a direction. And maybe it'll bring us one step closer to Richard Register's desire for cities built around transit and bicycles.

      P.S. For graduate students, inventors and entrepreneurs in the United States, it may not be completely implausible to get funding from DARPA to develop better folding bicycles.

    • 2015-05-14T23:04:00.000-07:00
    • (link)
  • RSS / humans?
    • This is a post that I had meant to publish 2 or 3 years ago and had forgotten about. Having found it in my unpublished drafts, I decided that it was still interesting, even if the "demise of Google Reader" is no longer fresh news

      The original post follows

      There has been some noise, since the demise of Google Reader, about whether RSS is "dead." I think much of the discussion on the topic is somewhat missing the point : even if RSS feeds are not something that normal humans want to collect, curate, subscribe to, and aggregate, RSS is still a great interchange format for computer programs that collect, curate, subscribe to, aggregate, and repackage RSS feeds.

      Case in point:

      • http://alumni.soe.ucsc.edu/~mds/?Other=expand&blog=expand#blog
      • http://mikeschuresko.blogspot.com/feeds/posts/default

      A nice thing about using RSS for such a purpose is that the content emitters don't have to run the same software or the same systems or even be run by the same people as the computer program reading the RSS. Now, of course, not all content emitters want their content to be scraped, collected, curated and repackaged. But, for those that do, RSS (or Atom) provides an ideal means of interchange. Note, also, that the example above is probably not anywhere near the most efficient or scalable way to repackage a feed from one source in order to re-display it in another.
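      As a concrete (if toy) illustration of what such a repackaging program can look like, here is a sketch that pulls the second feed above and re-emits it as a bare-bones HTML list. It assumes the third-party feedparser package; caching, error handling and polite polling intervals are all omitted:

        import feedparser  # third-party: pip install feedparser

        def repackage(feed_url):
            """Fetch an RSS/Atom feed and return a minimal HTML rendering of its entries."""
            feed = feedparser.parse(feed_url)
            items = "\n".join('<li><a href="%s">%s</a></li>' % (e.link, e.title)
                              for e in feed.entries)
            return "<h1>%s</h1>\n<ul>\n%s\n</ul>" % (feed.feed.title, items)

        print(repackage("http://mikeschuresko.blogspot.com/feeds/posts/default"))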

      I believe that this is a corollary to the idea that Twitter is the ideal medium for machines to broadcast short updates to one another and to humans in a medium that is authenticated, but not private.

    • 2015-04-05T20:15:00.002-07:00
    • (link)
  • Homebrew solar battery charger
    • A neighbor of mine recently built a solar-powered battery charger out of a variety of parts, including an old computer monitor stand as the stand for the solar panel.

      He was particularly proud of his energy efficiency, his use of predominantly analog components, even for tasks that seem akin to "logic", and the various clever tricks he used to achieve zero quiescent current.

      He admitted that a more digital (and, particularly, microprocessor-based) design would have been easier and more flexible, but contended that the level of efficiency he achieved would have been impossible to reach with anything other than an analog design.

      His device had a series of modular adapters for different levels of power, including USB, Sony laptop, an extra laptop battery, and a household-voltage AC adapter (capable of running a fan).

      Unfortunately he didn't have circuit diagrams to share, having designed most of the thing primarily in his head, and directly in circuitry.

    • 2015-04-05T20:10:00.000-07:00
    • (link)
  • AC / DC
    • I recently had a conversation with a friend, who is interested in third world economic development, "leapfrogging" technologies, and energy, about AC vs. DC for electrical power transmission and distribution. He asked me for my further thoughts on the matter, so I will put them here, along with some caveats about what things I know that I don't know. I invite readers who know more about the topic to leave corrections and additions in the comments to this post.

      A bit of history

      The debate about whether to transmit power as AC or DC raged in the late 19th and early 20th centuries, with Thomas Edison advocating for DC transmission of electricity, and Tesla advocating for AC. At the time, AC was adequate for most household applications (many of which involved the use of electrical current to produce heat, or to produce lighting through heat, and thus didn't care about the direction in which current travelled at any given time) and was vastly easier to efficiently step up or down in voltage, allowing for efficient transmission of power over long distances. One exception was that it was easier to build a variable-speed electric motor to run on DC (it is trivial to build single-speed AC motors, especially if one has 3-phase AC power, which is generally what is transmitted over longer distances, and what is supplied to industrial facilities). Older elevator technologies (circa early 1900s) tended to use DC motors, and tended to be installed in dense urban cores, where a large number of electricity customers could be served without the need for long-distance transmission. For these reasons, small DC power grids existed in many of America's large cities for decades after AC otherwise won what was known as the "war of the currents". See, for instance

      • http://www.coned.com/newsroom/news/pr20071115.asp
      • http://spectrum.ieee.org/energy/the-smarter-grid/san-franciscos-secret-dc-grid
      • http://cityroom.blogs.nytimes.com/2007/11/14/off-goes-the-power-current-started-by-thomas-edison/?_php=true&_type=blogs&_r=0

      To make a long story short, one generally wants low voltages, and the capability of producing high currents, at the place where the electrical energy is used. But it is far more efficient to transmit electrical power at high voltage (and comparatively low current). AC is fairly easy to step up and down in voltage using transformers. It has not, historically, been easy to do the same thing with DC power.
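      A quick back-of-the-envelope illustration of why (with made-up numbers): for a fixed amount of delivered power P, the line current is I = P/V, and the resistive loss in the wires is I^2 * R, so multiplying the voltage by 100 divides the loss by 10,000.

        def line_loss(power_w, volts, line_ohms):
            """Resistive loss in a line delivering power_w watts at the given voltage."""
            current = power_w / volts         # I = P / V
            return current ** 2 * line_ohms   # loss = I^2 * R

        # 10 kW delivered over a line with 0.5 ohms of resistance (illustrative numbers)
        print(line_loss(10_000, 240, 0.5))     # ~868 W lost at 240 V
        print(line_loss(10_000, 24_000, 0.5))  # ~0.09 W lost at 24 kV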

      Advantages of DC in the modern world

      One drawback to AC power stems from the fact that modern power grids are extremely interconnected. Placing two or more AC power sources on the same network requires that the sources be synchronized. This requires some form of dynamic control. A simple thought experiment should reveal why two out-of-phase AC sources wired to one another effectively create a short. Dealing with these synchronization issues has led to some fascinating control theory papers, but I could understand why practical engineers might want to dispense with these problems altogether and transmit electrical power via DC current. I don't know how much more complicated AC synchronization problems become when individual consumers are allowed to produce their own power and feed it back into the grid.

      As an extra piece of terminology (and one which might be very important to understanding other discussions of the electrical grid), power engineers divide the grid, conceptually, into two distinct sorts of networks.

      1. Transmission networks carry high-voltage electricity over long distances and interconnect different power plants on the grid. It is my impression that AC synchronization problems generally occur in the transmission network portion of the power grid.
      2. Distribution networks carry lower-voltage electricity from the transmission networks to end users, either industrial, commercial or home consumers. It is my impression that there is generally only one logical path for power to take between any two points on the distribution network (with the note that a "path" in this case might be 2, 3 or even 4 wires, depending on the form of the AC current).

      Further, modern appliances are quite different from those of the days of Edison and Tesla. Computers and other digital equipment generally require DC power sources (the "power supply" of your desktop computer contains a transformer, a voltage rectifier to convert AC to DC and a voltage regulator to maintain the voltage of the supply at a steady level, while your laptop likely has the same equipment mostly embedded in its power cord). LED lighting works with DC current (any LED light that you can screw in to your conventional lighting fixtures must come with, at bare minimum, a voltage rectifier to convert the AC of your light fixture into DC for the LEDs). Today's TVs, being digital appliances, require DC power internally.

      DC is of particular appeal to off-grid power systems for a variety of reasons. For one thing, solar cells naturally produce DC current. Most conventional electrical power generation (coal, oil, nuclear and the sorts of large solar installations that use mirrors to heat water) at some point heats water to turn a turbine which turns a generator. It is fairly easy to design such a system to produce 3-phase AC power. I am not sure how difficult or easy it is to make such a system generate DC without using a rectifier. Equally important, perhaps even more important, to off-grid systems is that DC is required to charge backup batteries, and is the natural output of chemical batteries. An off-grid power system using solar for energy generation and batteries for storage will naturally want to be DC. I note here that one should be wary about using off-grid solutions with battery backup to leapfrog economic development in the developing world : the most economical rechargeable battery solution at this point is still lead-acid batteries, which come with a host of problems. I will try to see if I can dig up resources on it. One might want to ask whether solar generation with battery backup becomes more feasible when all of one's appliances become much lower power (lighting could be LED-driven, heating is less of an issue in parts of the developing world, and computing and communications are becoming more efficient every day; I still wouldn't want to run my washing machine off of batteries, though). Battery technology is also rapidly improving, driven by consumer demand for things like smartphones and laptops with long battery life.

      A revolution in transmission technology

      One of the main advantages of AC current for electrical distribution, as mentioned above, is the ease of stepping voltages up and down, to allow transmission to occur at very high voltages, while giving end-users safe and convenient low-voltage electrical energy with high current capacity. Safety aside, giving high-voltage to end users would be infeasible for a variety of basic electrical reasons. Common materials, such as air, behave differently under high voltage and would need extra considerations.

      But, because of the problem of AC generator synchronization, utilities have found it to be desirable to have their large high-voltage interconnects run on DC power, which is much easier to synchronize and coordinate.

      Thankfully the technologies to convert high-voltage AC (HVAC) to high-voltage DC (HVDC) and to step DC voltages up and down have improved radically during the semiconductor revolution, as technologies originally designed for lower-power applications have found their way into the world of power electronics. A good summary of the state of things is provided by Wikipedia : see http://en.wikipedia.org/wiki/High-voltage_direct_current. Photographs of some of these new pieces of equipment are spectacular in their scale and design, see http://en.wikipedia.org/wiki/File:Pole_2_Thyristor_Valve.jpg . One of the more fascinating pieces of high-voltage DC interconnect technology is the proposed Tres Amigas Superstation in New Mexico, which plans to use superconducting wires to transmit DC current to connect the three major energy grids in the US.

      Summary

      Advantages of AC current

      • Ease of stepping up/down voltage (for efficiency in transmission)
      • Ease of making a single-speed motor (for instance, your coffee grinder)
      • AC is the natural output of the sort of electrical generator I would design were I to design a generator
      • Adequate for heating applications

      Advantages of DC current

      • Avoids the AC synchronization problem
      • Good for variable-speed motors (anything from the motor on the Honda Insight, to the stepper motor in your hard drive, to wheelchair motors, and I think even washing machine motors).
      • What batteries want to be charged with
      • What batteries output
      • What solar cells output
      • What computers and digital electronics want to work with

      Technologies to watch if one is interested in these issues

      • High-voltage rectification
      • High-voltage DC - DC step-up / step-down
      • Battery technology
      • Socio-economic situations that might produce micro-grids
      • Technology that allows households to accomplish basic tasks using less power (I suspect there is little to no room for improvement in this area for things like electric stoves and electric heaters, but quite a bit of recent progress for communications, computing, lighting and entertainment. I am curious as to basic things like "can one make a significantly lower-power automatic washing machine")

      One thing I have not given much thought to is which form of power is easiest for people with little electrical knowledge to effectively deploy in micro-grids, and what sorts of technologies could change this.

    • 2014-06-28T13:20:00.000-07:00
    • (link)
  • Gray Codes
    • Recently I was thinking about Gray codes (http://en.wikipedia.org/wiki/Gray_code).

      Gray codes are, effectively, a way to count from 0 to (2^n - 1) on an n-bit counter while only flipping 1 bit at a time. Most simply, they are mappings of the form f : ℤ_{2^n} → ℤ_{2^n} such that for any k ∈ ℤ_{2^n}, f(k) and f(k+1) differ by exactly 1 bit.

      These have a variety of uses, for instance, in robotics one can make an n-bit encoder wheel such that a smooth rotation of the wheel only changes one bit at a time, avoiding ambiguities when multiple detectors change their state at the same angle and give wildly inconclusive readings : see https://www.google.com/search?q=gray+code+encoder+wheel&safe=off&source=lnms&tbm=isch.
      More examples can be found on wikipedia
      The context in which they have come up the most often in my life is when thinking about how to enumerate all possible 2^n settings on an n-bit DIP switch (http://en.wikipedia.org/wiki/DIP_switch). I don't like having to flip k switches to go from the setting corresponding to (2^k - 1) to 2^k for every integer k ≤ n, but I would also like to be able to compute which bit to flip next without having to expend a lot of mental effort. I recently came up with a trick for this. I am likely not the first person to come up with this trick, but I couldn't find it written up anywhere else, so I am blogging about it.


      Method for finding the next bitstring in a Gray Code

      To iterate through an n-bit Gray code do:
      Starting at 0, repeat the following steps

      1. On every even iteration (we number our iterations starting at 0), flip the rightmost bit of the current number to get the next number.
      2. On every odd iteration, find the rightmost 1 bit (the rightmost bit that is set to 1) in the current number. Flip the bit to the left of that to get the next number. If there is no bit to the left (if the current number is 2^(n-1)), flip the remaining 1 to get back to 0.

      If you get lost, simply count the number of 1s in the current number. If it is odd, you are on step 2. If it is even, you are on step 1.
      Reversing the order of the steps traverses the Gray code in reverse order.
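      Here is the same method written out as code (Python; a direct transcription of the two steps above, nothing clever):

        def next_gray(g, n):
            """One step of the iteration above, applied to the n-bit value g."""
            if bin(g).count("1") % 2 == 0:
                return g ^ 1               # even number of 1s: flip the rightmost bit
            low = g & -g                   # isolate the rightmost 1 bit
            if low == 1 << (n - 1):
                return 0                   # that 1 is already the leftmost bit: wrap to 0
            return g ^ (low << 1)          # otherwise flip the bit just to its left

        g = 0
        for _ in range(2 ** 4):            # walk the full 4-bit Gray code
            print(format(g, "04b"))
            g = next_gray(g, 4)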


      Background on "reflected binary Gray codes"

      One of the earliest examples of a Gray code is what seems to be called a "reflected binary Gray code".
      Here is how it works :
      We will construct it on n bits recursively as a sequence of 2^n bitstrings, each of n bits. The kth bitstring in the sequence will be the number mapped to by k.

      1. The reflected binary Gray code on 1 bit is just the sequence [0, 1].
      2. To get the reflected binary Gray code on n bits, compute the code on (n-1) bits, add a leading 0 to each bitstring, then compute the code on (n-1) bits in reverse order, adding a leading 1 to each bitstring. Concatenate the resulting sequences (putting the "reflected" (or reversed) sequence after the forward sequence)
      This yields a Gray code because : (we induct on the number of bits in the code)
      • Each n-bit number is included once in the n-bit code. Inductively the rightmost (n-1) bits of this number must occur in the code on (n-1) bits. Either a given number starts with a leading 1, putting it in the second half of the sequence, or it starts with a leading 0, putting it in the first half of the sequence.
      • Each two consecutive numbers in the n-bit code differ by one bit. Either their leading (leftmost) digits are the same, in which case they differ by 1 bit in their rightmost (n-1) bits (by induction), or their leading (leftmost) digits differ, in which case the bitstrings differ by 1 bit (by construction).
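      The recursive construction above, written out directly (Python; just a transcription of steps 1 and 2):

        def reflected_gray(n):
            """Reflected binary Gray code on n bits, as a list of 2^n bitstrings."""
            if n == 1:
                return ["0", "1"]
            prev = reflected_gray(n - 1)
            return ["0" + s for s in prev] + ["1" + s for s in reversed(prev)]

        print(reflected_gray(3))  # ['000', '001', '011', '010', '110', '111', '101', '100']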

      It is well-known that translating from the kth bitstring in this form of Gray code to the integer k can be done by recognizing that the ith bit of the number k is equal to the XOR (sum modulo 2) of the leftmost n-i bits (assuming the rightmost bit is bit 0) of the kth bitstring in the Gray code. TODO : replicate proof(s) here : proof via matching to the above construction, and proof by showing this to be a bijection and showing that the successor operation only flips one bit at a time.
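      In code, this XOR fact gives a short conversion in each direction (Python; the k ^ (k >> 1) form of the forward map is the standard closed form for the reflected code, stated here without proof):

        def int_to_gray(k):
            """The kth bitstring of the reflected binary Gray code, as an integer."""
            return k ^ (k >> 1)

        def gray_to_int(g):
            """Invert int_to_gray: bit i of the result is the XOR of bits i, i+1, ... of g."""
            k = 0
            while g:
                k ^= g
                g >>= 1
            return k

        assert all(gray_to_int(int_to_gray(k)) == k for k in range(256))
        assert all(bin(int_to_gray(k) ^ int_to_gray(k + 1)).count("1") == 1 for k in range(255))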


      Proof that the algorithm presented here works

      Reversing the order of operations reverses the order of traversal

      Because each operation is just a "flip" (addition modulo 2) of a bit, each operation is its own inverse. Since there is one unique "rightmost bit set to 1" in any number, repeating the two steps in reverse order will apply the inverses to the last sequence of steps required to get to the current number, which should reverse the iteration. ∎

      The algorithm described at the top of this post ("Method for finding the next bitstring in a Gray Code") replicates the "reflected binary Gray code" described above.

      Assume this is true for m-bit Gray codes for any m < n. Show for n by induction.

      • The first 2^(n-1) steps are identical to the steps in the case for (n-1) bits. These match the first 2^(n-1) steps of the reflected binary Gray code (by induction).
      • After this we are left with a single bit set to 1 in the next-to-leftmost position. Since we've taken an odd number of steps, the next step is to flip the leftmost bit to 1. From here on, we are replicating the case for (n-1) bits, but in reverse (see above) and with the leftmost bit set to 1.

      Proof by XOR

      You can also prove this using the XOR fact stated in the section on reflected binary Gray codes. Incrementing a regular binary number involves either
      • If the rightmost bit is 0, flip it to 1. In the Gray code version, this corresponds to "If there are an even number of 1 bits, flip the rightmost bit".
      • Otherwise (if the rightmost bit of the number is 1) flip the rightmost 0 to 1, and flip all 1s to the right of it to 0 (i.e. ripple-carry). In the Gray code version, this corresponds to "If there are an odd number of 1 bits, flip the bit 1 to the left of the rightmost 1."



      Example

      Start with
      0 0 1 0 1 1 0
      
      Since there are an odd number of 1s, we are at the "odd" step of the iteration.

      So we find the rightmost 1 bit.
      0 0 1 0 1 1 0
                ^
      
      find the bit 1 to the left of it
      0 0 1 0 1 1 0
              ^
      
      and flip that
      0 0 1 0 0 1 0
      


      Then we flip the rightmost bit (even iteration).
      0 0 1 0 0 1 1
      


      Now find the rightmost 1 bit.
      0 0 1 0 0 1 1
                  ^
      
      and flip the bit 1 to the left of it.
      0 0 1 0 0 0 1
      


      Then we flip the rightmost bit again.
      0 0 1 0 0 0 0
      
      and so on.
    • 2013-08-14T09:11:00.000-07:00
    • (link)


  • Sphere-based drive trains
    • During a recent idle moment, I started thinking about spherical wheels, spherical gears (I think that one might want to use 3 spheres in order to transmit rotational motion from one sphere to another, as opposed to 1 conventional gear for the same purpose) and (most interestingly) omni-directional electromagnetic motors with spherical (rather than cylindrical) shafts.

      Here is (roughly) how the motor would work :

       Make the "shaft" a sphere (d'uh) and cover the surface with a pseudo-random pattern of permanent magnets.

       Put the sphere-shaft in spherical bearings, and surround it with a regular pattern of controllable electromagnets.

       At every time-step, solve for the set of magnetic fluxes (or electric currents) on the pattern of electromagnets to best apply the appropriate delta rotation to the shaft. Or, simpler yet, render (draw) the pseudo-random pattern of magnets on the shaft, rotated by the desired amount, using the controllable electromagnets.

      Presumably some clever electrical engineering could measure and/or estimate the orientation of the spherical-shaft by the induced current on the outer electromagnet coils (at least while the shaft is moving). Or, hey, one could just draw a recognizable pattern on the sphere-shaft and use light sensors/emitters to estimate shaft orientation.
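      For what it's worth, the "solve for the set of electric currents" step becomes a plain least-squares problem if one assumes (very generously) that the net torque on the shaft is linear in the coil currents, with some orientation-dependent coupling matrix. A toy sketch with a made-up coupling matrix A:

        import numpy as np

        rng = np.random.default_rng(0)
        n_coils = 12
        A = rng.normal(size=(3, n_coils))           # assumed torque = A @ currents (illustrative only)

        desired_torque = np.array([0.0, 0.0, 1.0])  # e.g. spin the sphere about the z axis

        # Minimum-norm currents that best produce the desired torque
        currents, *_ = np.linalg.lstsq(A, desired_torque, rcond=None)
        print(np.allclose(A @ currents, desired_torque))  # True whenever A has full row rank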
    • 2012-10-20T18:50:00.000-07:00
    • (link)
  • Scripting, processing.org, images, video, etc
    • So, at my new job, a bunch of my coworkers are "designer-programmers." Usually these are people who have training in art or design and who teach themselves how to program (often quite well). As one might expect, many of them shifted into C++ and OpenGL after first whetting their appetites with processing.org (which I will simply call "processing" from here on).

      Every once in a while I'll run into a quick one-off scripting task whose output (or input!) is an image or a video asset, and my first instinct will be to start looking up scripting language wrappers around things like PIL and ImageMagick. Whenever I express a thought in this vein, invariably, one of the designers will say "Why don't you just use processing?" or "Processing can do that!"

      And it turns out that they're right. I have conceded that, even without a strong knowledge of processing, processing is a better tool for quickly programmatically generating images and video than many of the more conventional scripting languages out there (including Ruby and Python). There are two reasons for this. The first of these is that processing gets out of your way, and lets you call visually-related API functionality without having to do a bunch of imports, or the equivalent of "system.out.println instead of printf". The second is slightly more subtle. Processing is structured around the idea that you'll have a setup, a draw, and an event loop, and that the "draw" will draw things, either every frame, or on certain events. It is absolutely amazing how much more natural this is than "print stuff out" for tweaking and debugging programs whose output is (primarily) other visual artifacts.

    • 2012-04-12T00:36:00.001-07:00
    • (link)
  • Open Source
    • I am currently open-sourcing or thinking about open-sourcing three pieces of software
      1. The simulation platform I used for most of my research in grad school. You can run it and see the results at http://alumni.soe.ucsc.edu/~mds/cclsim/.

        Currently I am in the process of talking to the appropriate people at the University of California about how to do this, as I wrote most of this in the process of research for the UC. Hopefully it'll turn out that language in one of the federal grants I received forces me to open it, or that at least it'll turn out to have so little commercial application that opening it is not a problem.

      2. The library and platforms used to generate most of the projects at http://alumni.soe.ucsc.edu/~mds/?News=collapse&Other=expand&Demos=expand#Demos.

        While these were developed when I was a graduate student, most of it was done before I had any research grants, and all of it was done on my own equipment.

        My main hesitation about opening it up is that I'll have to publicly admit to having used "glVertex3f" as late as 2004/2005.

        For what it's worth, a snapshot of the code is at http://www.club.cc.cmu.edu/~mds2/old_ucsc_gl_source/

      3. The source for http://alumni.soe.ucsc.edu/~mds/spacecraft_sim/.

        While this project was built on top of the libraries described in the second item, this particular integration *was* done as part of official university research. I'm hoping that the fact that I did it on a NASA grant means I am forced to open it, but because it was done as a University researcher, I am pretty sure I have to go through a different path to open it up.

    • 2011-12-15T21:48:00.000-08:00
    • (link)
  • Shared map : others may find it useful
    • Back when I was looking for postdocs in 2009, I have to admit to having had a geographic bias, and was particularly interested in Universities in Southern California which might offer postdocs related to my research area.

      So I decided to experiment with a little-used Google Maps feature, and create a "map of PhD-granting Southern California Universities." It is of somewhat little use to me at the moment, but I thought I'd share anyway, just in case anybody else with a two-body problem needs such a list.

      Map of Southern California Universities
    • 2011-05-16T16:15:00.000-07:00
    • (link)
  • View of the San Joaquin Valley, including fruit/nut trees and aqueduct
    • 2010-10-05T18:28:00.000-07:00
    • (link)
  • Hyperbolic Paraboloid Buildings
    • Ever since I first stumbled upon them in "A Visual Dictionary of Architecture" I've been fascinated with the concept of hyperbolic paraboloid buildings.

      Essentially a hyperbolic paraboloid is a structure which, in some frame, is described by the equation

      z=x*y

      If you slice it along constant z you get

      1~ x*y or y~1/x or x~1/y

      If you slice it along a line y=k*x you get

      z=k*x*x or z=k*x^2

      But the part that makes it useful as a surface for buildings is that if you slice it along constant x or constant y you get

      z~x or z~y

      in other words, a straight line.
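      A tiny numerical sanity check of that last claim (nothing deep; it just confirms that a constant-x slice of z = x*y is a straight line in space):

        import numpy as np

        x0, y = 1.7, np.linspace(-2.0, 2.0, 5)
        pts = np.column_stack([np.full_like(y, x0), y, x0 * y])  # points on the slice x = x0
        d = np.diff(pts, axis=0)                                  # successive direction vectors
        print(np.allclose(np.cross(d[0], d), 0))                  # True: all directions are parallel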

      For this reason, it is called a "ruled surface", and a framework for it can be made entirely out of straight beams, as in the pictures from this site

      http://www.savetrees.org/Hyperbolic%20Paraboloid%20roof%20shelter.htm



      One problem with such a surface is that, while it is easy to construct the frame out of straight beams (and the resulting structure is known for its strength), constructing the roof to fit into the frame is non-trivial, especially if the surface must be hard.

      It is *possible* to construct something that mostly fits snugly over it by cutting pieces out of a cloth tarp (as shown in the above link), but any patch of the true surface of the object is curved (as well as any line which is not constant in either x or y)

      One solution might be a "concrete tent" similar to what is proposed in this article
      http://www.wired.com/science/discoveries/news/2005/03/66872 (start with a "tarp with sections cut out" and brush wet concrete over it. Then let the structure harden)

      Another alternative might be to layer a fine mesh of straight wires along constant x and constant y, and take some goopy filler material, such as plaster, and brush it over the wire mesh. I wonder if something like this is how they make permanent buildings with hyperbolic paraboloid roofs. Some of the images of such structures certainly look like a wet material was painted on top of a mesh and then allowed to dry, although larger hyperbolic paraboloid structures, such as the Catholic Cathedral on Gough Street in San Francisco, are often clearly made from piecewise surfaces (flat or otherwise). I think to have a piecewise surface of flat segments perfectly conform to a frame of straight segments in such a roof, the segments have to be triangles (and highly irregular ones at that!)
    • 2010-08-21T13:24:00.000-07:00
    • (link)
  • This time last year


    • This time last year, the mountains around Los Angeles were on fire. At 15 miles away, you could see the flames sometimes in broad daylight. And at night a 30 degree arc of the skyline would glow red. One of my favorite nights in Los Angeles was the time I sat out with a bottle of cheap red wine just watching as the sky burned around me.

      Side note : the way I found the time-lapse video of the high voltage tower construction
      http://mikeschuresko.blogspot.com/2010/01/really-cool-time-lapse-video.html that I posted around this time last year was by looking for videos of the Station Fire, and finding this one
      http://www.youtube.com/watch?feature=iv&v=jR_3N7nVPw8 by the same guy. I like the music in his videos, but I feel that they don't adequately capture the scale of the fires or how the sky turned red from the other side of the city or how everything smelled like a smoky oak campfire.
    • 2010-08-19T17:22:00.000-07:00
    • (link)
  • Re-posting an interesting idea.
    • Re-posting
      http://robotmonkeys.net/2010/07/22/bullet-train-hopping/

      Jonathan writes:
      "Jianjun Chen in China proposed an interesting idea for eliminating station dwell times for trains. In his/her design, each train has a detachable boarding shuttle mounted on the roof of the train. Passengers who wish to disembark leave the main passenger compartment of the train, and enter the shuttle. Meanwhile, embarking passengers board an identical shuttle already located at the station. As the train approaches, the shuttle mounted on the train, disengages so it can slow to a stop at the station, while the shuttle is grabbed and mounted onto the moving train."
    • 2010-07-27T15:31:00.000-07:00
    • (link)
    • http://earthobservatory.nasa.gov/IOTD/view.php?id=44717&src=eorss-iotd

      You can see Santa Cruz (and the rest of redwood country) on this map.
    • 2010-07-21T17:28:00.000-07:00
    • (link)
  • A manifesto in the form of a sequence of seemingly unrelated ideas
      • Programming is a social activity. Sure there are cases where you are working with a fixed set of libraries you know well, and you're trying to generate something interesting within this vocabulary of thought, but most industrial coding seems to consist of cobbling various libraries together.
      • Computing is a social activity. (no explanation needed)
      • (completely unrelated) Things that only the "dorky kids" did in my generation are the exact things "all the kids" seem to be doing in subsequent generations.
      • Programming is the fundamental activity people do with computers, much like "driving" is the fundamental activity people do with cars
      • Learning to program only *seems* hard because we try to force students from zero to programming-literacy in one college semester, then fail them if they fall behind on the interesting stuff that depends on that basic literacy. Imagine if we spent a year slowly teaching programming for every year we taught kids about reading, writing or arithmetic.
      • Analogy between modern professional programmers and ancient scribes.
        • Elite group of educated scholars
        • Write using needlessly difficult technology (either "C" or "All caps and no punctuation")
        • Mostly write things like "So and so owes the king 15 sheep" / modern day business software
      • Analogy between programming and other basic intellectual tasks.
        • Perhaps in the future being a "professional programmer" will be as weird as being a "professional writer" or a "professional mathematician" today
        • But, likewise, everyone will need to do a little reading/programming/arithmetic
        • There may be many "high school programming teachers"
      • Printing press analogies
        • Looking at computers and the internet today and getting interested in them as "fascinating machines" is like looking at the first printing presses and becoming obsessed with the screw mechanism.
        • Omitting the rest of the analogy
    • 2010-07-19T18:58:00.000-07:00
    • (link)
  • Robot household cleanup and RFID
    • What if every thing in your house that wasn't trash had an RFID tag?

      Your household swarm of cleanup robots could potentially pick up every item. If the item doesn't come with an RFID tag, it goes in the trash/recycling/etc. (telling these apart might be hard). If it does come with an RFID tag, there's a household database that tells your robots where the thing goes when it gets put away.

      I suppose this wouldn't solve all the problems with "program robots to clean your house" but it sure seems like a big first step. (It might be worth testing any proposed solution to the "robots clean your house" problem with the "the baby just defecated on the floor" thought experiment)

      (food items would be an interesting issue, the RFID tag would have to have some sort of expiration information, and when the food gets thrown out / composted/recycled, the RFID tag has to be removed for later reuse. Things like shampoo bottles and toothpaste tubes present similar issues.)

      P.S. If any readers find similar ideas posted elsewhere, please post links to the more interesting write-ups in the comments section.
    • 2010-06-22T15:27:00.000-07:00
    • (link)
  • The right approach, I think
    • http://www.oxygen.lcs.mit.edu/Overview.html

      Having glanced through the websites of several "ubiquitous computing" research groups, I think that the Oxygen project at MIT is the closest I've seen to "something on the right track." Partly I occasionally track / glance at what's happening in this field because I think it could become an interesting thing to be involved in in the future. Partly I keep looking at it because I have this sneaking suspicion that, when we start getting to pervasively embedded small computers, HCI will start looking like a robotics problem. And partly I'm interested in how the network connectivity for pervasive computing might work : perhaps the distributed robotics and ad-hoc network people have a head-start on some of the issues that might come up.
    • 2010-06-22T14:08:00.000-07:00
    • (link)
  • Central Valley Airport
    • Why I think it would be reasonable to construct an international airport in California's Central Valley (pending the construction of at least the Central Valley portion of the California High-Speed Rail project)

      1. It's flat (so you can arbitrarily scale it up with new runways). Of course, the current solution to overcrowding at SFO and LAX is to increase runway space at outlying airports like SJC and Ontario -- one of which is already in the Central Valley, while the other one is along one of the proposed high-speed rail lines (albeit the part of the line least likely to be built soon -- most sensible discussion thinks high speed rail will connect to CalTrain/Metrolink, and CalTrain will take care of the San Jose --> San Francisco leg).
      2. High speed rail will make a hypothetical Central Valley Airport accessible from the current major metropolitan areas (not useful if you're flying to LA, but what's an extra hour of train ride if you're going to Singapore?)
      3. One of the parts of the Schwarzenegger agenda I actually sort-of-agree with is the (unstated) intent to push urban/suburban growth into the Central Valley, most likely along the CA-99 corridor. The housing bubble stopped this somewhat, but the infrastructure/empty housing is in place, and a chain of UC campuses stretching from Davis to Riverside could spur job growth.
      4. Part of me secretly wants to teach at UC Merced. I predict that, as time goes on, UC Merced will rapidly become a top engineering school, (making it even harder for me to do this). High speed rail and a massive airport would make it easier for an urbanite like me to live there.
      5. Unrelated, but I'd like to curse the Southern California software industry for locating primarily in the public transit deadzone of Santa Monica. Although I'm told that there is a new Expo Line they are building between downtown and Santa Monica. I'd like to remind everyone that the massive Southern California megalopolis is "train scale" and not "car scale."


      (edited later to add)
      One of the drawbacks to this plan (and plans involving "Central Valley growth" in general) is that the Central Valley is already being used for agriculture (and is fairly unique agricultural land). According to Wikipedia,

      "The Central Valley is one of the world's most productive agricultural regions. On less than 1 percent of the total farmland in the United States, the Central Valley produces 8 percent of the nation?s agricultural output by value: 17 billion USD in 2002."

      Presumably some of it can be diverted from agricultural use, but it could radically change our food production if it were to entirely convert over to sprawl.
    • 2010-05-27T21:36:00.000-07:00
    • (link)

For more blog items, see http://mikeschuresko.blogspot.com/.
More
Photos/Links/Personal/etc.
Photos
Pictures of people/places/things
[Photo thumbnails: mountains, modified_mandel, tet_at_lees, waterfall, sf_night, dimsum, MikeSF, bigeyes, mike_sd, museum_outside, mike_age_22, cafe_outside]
CDC08
Pictures of IEEE CDC 2008, Cancun, Mx.
[Photo thumbnails: cloudy_beach, palm3, more_of_the_beach, some_sort_of_exhibition, funny_money, stylish_cell_tower, palm2, palm1, they_always_ran_out_of_coffee, blurry_night, cancun]
  • I keep running into Karl Obermeyer at conferences.
  • I am friends with some of the other graduate students in the AMS department and the Computer Science department at UCSC.
Info
Contact info / Resume.
Michael D. Schuresko
Michael.Schuresko @ gmail.com
703.785.4637
Research statement

Michael Schuresko
Baskin School of Engineering
University of California
1156 High Street
Santa Cruz, California 95064

Mountain View, California
(Address available upon request.)