
Nice, https://huu.la implements a similar idea, but for web pages -- for anyone who's interested.


Cool work! I'm very interested in this topic. Just wondering, how well does it generalize beyond your training data, rather than just memorizing a strict input-output mapping?


The bootstrap version generalizes to new images with 97% accuracy. Because the vocabulary is limited, you can train the model overnight. To make the model generalize across the full range of HTML/CSS markup, you need significantly more compute.


Mapping screenshots to code is not hard. Simply having the model memorize the screenshot-to-code mappings in the training data can give you almost 100% accuracy (for a demo). What is hard is how the model generalizes when given a new screenshot. Making something work for mobile is a much easier task than making it work for other, more complex UIs, though. Looking forward to seeing more updates on this!
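
A toy way to make that distinction concrete (not the submission's code, just a generic sketch with random stand-in data): a 1-nearest-neighbour "memorizer" scores perfectly on examples it has seen, while the held-out score is the one that actually tells you about generalization.

    # Toy illustration: memorization vs. generalization.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))      # stand-in for screenshot features
    y = rng.integers(0, 5, size=500)    # stand-in for code/template classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    memorizer = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    print("train accuracy:", memorizer.score(X_train, y_train))  # ~1.0 by construction
    print("test accuracy:", memorizer.score(X_test, y_test))     # ~chance on random labels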


Yep, for example more dynamic UI such as tables, lists of components (for example kanban swim lanes), etc.


Nice work! I've been trying to pick up some Apps Script stuff too. Where did you learn it?


I don't think I'm using Apps Script. I just started off with the Google Sheets JavaScript Quickstart guide, and it's pretty well documented: https://developers.google.com/sheets/api/quickstart/js.
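
For anyone who prefers Python, the equivalent "read a range of values" call from the official client library looks roughly like this; the spreadsheet ID and range are placeholders, and the OAuth credential setup covered in the quickstart is assumed to have produced `creds`.

    # Rough Python equivalent of the quickstart's "read a range" call.
    # Requires google-api-python-client and valid OAuth credentials.
    from googleapiclient.discovery import build

    SPREADSHEET_ID = "your-spreadsheet-id"   # placeholder
    RANGE_NAME = "Sheet1!A1:C10"             # placeholder

    def read_range(creds):
        service = build("sheets", "v4", credentials=creds)
        result = (
            service.spreadsheets()
            .values()
            .get(spreadsheetId=SPREADSHEET_ID, range=RANGE_NAME)
            .execute()
        )
        return result.get("values", [])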


Great article. It covers a lot of aspects of how ML can help designers be more creative and productive!

At Huula, I'm a firm believer that ML can automate various parts of web design. We just released a new experiment, CSSToucan[1], which auto-colors text on web pages with recurrent neural networks. It learns to color text on web pages without a single line of color theory in the code; everything is learned from the data. Hope to see more and more ML-powered design tools emerging!

[1]: https://huu.la/ai/csstoucan
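
CSSToucan itself isn't public code as far as this page shows, so the following is only a hypothetical sketch of the general shape: a small recurrent model reads a sequence of per-node page features (backgrounds, font size, surrounding colors, and so on) and regresses an RGB text color. The feature dimensions and layer sizes are made up.

    # Hypothetical sketch of the idea, not CSSToucan's actual code.
    import tensorflow as tf

    SEQ_LEN, FEATURE_DIM = 16, 32  # assumed shapes, purely illustrative

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, FEATURE_DIM)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(3, activation="sigmoid"),  # (r, g, b) in [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(page_feature_sequences, observed_text_colors, epochs=10)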


Wednesday Special!


As a DIY drone builder for 5 years, here are my two cents.

An Arduino lets you program it. APM and MultiWii are essentially Arduino-based. Video is a bit tricky, but you can always transmit it back to your laptop and process it however you like.
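
For the "process it on your laptop" part, if the video receiver shows up as a capture device (or exposes an RTSP/UDP stream), a minimal OpenCV loop is enough to start experimenting; the device index below is a placeholder.

    # Minimal sketch: read frames from a receiver that appears as a capture
    # device (or swap in a stream URL) and process them on the laptop.
    import cv2

    cap = cv2.VideoCapture(0)  # placeholder: device index or stream URL
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # example processing step
        cv2.imshow("fpv", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()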


5 years??! That's amazing. Do you have a blog about your projects? That would be an interesting read.


There are no people sitting in the vehicle in any of these videos. Is the 300 km range with or without a load?


They've never done a manned flight[0]. Presumably they expect 300 km with a typical load, but they haven't actually demonstrated it yet.

[0] https://lilium.com/mission/


Good work! Training on vector graphics instead of rasterised images seems like such a good way to go. With some related data, I imagine the output could also be colored.


Nowadays nobody talks about the "dating app" image WeChat had in its early years, which brought it a lot of users.


Yes. The Shake and Message in a Bottle features were used in that context in the early days.

