Cool work! I'm very interested in this topic. Just wondering, how well does it generalize beyond your training data, rather than just memorizing a strict input-output mapping?
The Bootstrap version generalizes with 97% accuracy on new images. Because the vocabulary is limited, you can train the model overnight. To make the model generalize across all HTML/CSS markup, you need significantly more compute.
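For anyone curious what a "limited vocabulary" buys you, here is a rough sketch (token names, layer sizes, and shapes are all illustrative, not the project's actual code): with only a handful of Bootstrap-style tokens, the decoder's output layer is tiny, which is a big part of why overnight training is feasible.

    # Illustrative only: a toy screenshot-to-markup next-token model over a
    # tiny Bootstrap-style vocabulary. Everything here is made up for the sketch.
    from tensorflow.keras import layers, Model

    # A couple of dozen tokens covers a Bootstrap-only DSL,
    # which keeps the softmax (and training time) small.
    VOCAB = ["<START>", "<END>", "header", "row", "single", "double",
             "quadruple", "btn-active", "btn-inactive", "text", "{", "}"]

    img_in = layers.Input(shape=(256, 256, 3))           # the screenshot
    x = layers.Conv2D(16, 3, strides=2, activation="relu")(img_in)
    x = layers.Conv2D(32, 3, strides=2, activation="relu")(x)
    img_feat = layers.GlobalAveragePooling2D()(x)

    tok_in = layers.Input(shape=(None,))                  # tokens generated so far
    emb = layers.Embedding(len(VOCAB), 32)(tok_in)
    h = layers.GRU(64)(emb)

    merged = layers.concatenate([img_feat, h])
    next_tok = layers.Dense(len(VOCAB), activation="softmax")(merged)

    model = Model([img_in, tok_in], next_tok)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

With full HTML/CSS the vocabulary (and the amount of data needed to cover it) explodes, which is where the extra compute comes in.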
Mapping screenshots to code is not hard. Having the model simply memorize the screenshot-to-code mappings of the training data can give you almost 100% accuracy (for a demo). What is hard is how the model generalizes when given a new screenshot. Having something work for mobile is a much easier task than having something work for other, more complex UIs, though. Looking forward to seeing more updates on this!
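To make the memorization point concrete, a toy illustration (nothing to do with the actual model, just the evaluation logic):

    # A "model" that only memorizes exact screenshot -> code pairs.
    # It is near-perfect on its training data and useless on anything new,
    # which is why only held-out screenshots measure generalization.
    train_pairs = {"screenshot_a.png": "<div class='row'>...</div>",
                   "screenshot_b.png": "<div class='btn'>...</div>"}

    def memorizing_model(screenshot):
        return train_pairs.get(screenshot)

    print(memorizing_model("screenshot_a.png"))    # works: it was memorized
    print(memorizing_model("screenshot_new.png"))  # None: no generalization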
Great article. It covers a lot of aspects of how ML can help designers be more creative and productive!
At Huula, I'm a firm believer that ML can automate various parts of web design. We just released a new experiment, CSSToucan[1], to auto-color text on web pages with Recurrent Neural Networks. It learns to color text on web pages without a single line of color theory in the code; everything is learned from the data. Hope to see more and more ML-powered design tools emerging!
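A rough sketch of the kind of setup described (the feature layout and color encoding here are my assumptions, not CSSToucan's actual pipeline): an RNN reads a sequence of per-element features from the page and predicts a text color class for each element.

    # Illustrative sketch only: predict a text color per page element with an RNN.
    # N_FEATURES and N_COLOR_CLASSES are made-up placeholders.
    from tensorflow.keras import layers, Model

    N_FEATURES = 12        # e.g. background color, font size, element depth...
    N_COLOR_CLASSES = 16   # quantized palette of candidate text colors

    elems = layers.Input(shape=(None, N_FEATURES))     # elements in DOM order
    h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(elems)
    colors = layers.TimeDistributed(
        layers.Dense(N_COLOR_CLASSES, activation="softmax"))(h)

    model = Model(elems, colors)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Trained purely on (page features -> chosen colors) pairs from existing
    # designs; no hand-coded color theory anywhere.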
As a DIY drone builder for 5 years, here are my two pennies.
Arduino lets you program it. APM and MultiWii are essentially Arduino-based. Video is a bit tricky, but you can always transmit it back to your laptop and process it there however you like.
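For the video part, a minimal sketch of the "process it on the laptop" approach, assuming the downlink shows up as a capture device or stream (the device index / stream URL is a placeholder for whatever receiver you use):

    # Read frames from the drone's video downlink on the laptop and process them.
    # The capture source is a placeholder: a USB video receiver (index 0)
    # or a network stream such as "udp://0.0.0.0:5000".
    import cv2

    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # example processing step
        cv2.imshow("drone feed", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()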
Good work! Training on vector pictures instead of rasterised images seems like such a good way to go. With some related data, I imagine the output could also be colored.