Have you thought about assistive technology/accessibility tasks as well? I would love to use such a device to control the touch screens on the inaccessible coffee machines at my clients' offices, for example, which I can't operate without sight. I'm sure there are many more examples like this.
Throwing complex robots at inaccessible devices is not the proper solution, but it is by far the quickest and most practical one. I'm not in the US, so I can't even buy one, and I'm also hesitant to buy something that is totally bricked when the company/cloud goes under.
That's a great idea! We thought about it in the context of elder care, where they could ask the robot to perform a task for them, but we first need the models to get a little better - hence why we're starting here, to collect the data before it spreads further.
And by the way, we already have an app that you can use to control the robot remotely, so you can use the skills you've taught it from a distance as you navigate it around your home!
On the concern that it would get bricked if the company goes under: our agent runs on other clouds, so it would be very easy to keep running if the company goes under - we would open-source it. But if you're not in the US, we can't easily ship it to you for the first batch anyway :)
> We thought about it in the context of elder care, where they could ask the robot to perform a task for them, but we first need the models to get a little better - hence why we're starting here, to collect the data before it spreads further.
I hope you continue this work for the foreseeable future, because this would be such a boon if it all pans out well.
Thank you, yes, there's a lot of good that can come out of this technology, and it needs to be developed with everyone's help in order to get there.
Braille tablets/multiline braille displays are finally coming. Traditional piezo-electric cells are not well suited for them, due to the space the cell + driver + electronics need. Fitting the four rows of pins in a cell already requires staggering the elements: if you disassemble a cell, you'll see that every row has a different pin length - the bottom row usually has the longest pins and the top row the shortest. So this approach doesn't scale to multiple lines of dots.
There are two commercially produced techniques for multiline braille displays now. The first comes from Dot (a Korean company), which makes the Dot Pad; this technique is also used in the HumanWare/APH Monarch, which will be an Android-based standalone braille tablet. The other technique comes from Orbit Research, with their Graphiti braille tablet. I couldn't find any good technical documentation that describes how these methods work exactly, so if anyone has any pointers I would like to read more about it.
I disagree. To "solve" their CAPTCHAs I had to register by providing a working email address. I don't run into hCaptcha problems that often, so by the time I need to solve a new one my cookie has usually expired and I have to reopen the link to get a new one before being able to continue. I just store such links in my password manager, but imagine having to find an email with the link they sent you over a year ago before you can simply continue with what you were doing. And even then, depending on your ad blocking/privacy settings, their cookie may not even work.
I think this whole thing is a big hurdle just because I'm unable to solve visual puzzles. Besides, a company collecting the email addresses of people who are disabled in one way or another, and handing them an identifying cookie, is a privacy/data disaster waiting to happen.
That being said, I think the audio alternatives for visual CAPTCHAs are also unacceptable. Even if you can hear them, they can be hard to solve, especially when they are not offered in your mother tongue. I think we can and should be able to do better by now.
Totally agree. Buying new appliances is hard if you are blind. It's either very cheap stuff that still has buttons, expensive stuff with touch screens that are unusable to me, or expensive stuff with touch screens plus an app to control it. Apart from not wanting to fiddle with a phone and an app for every single action, the expensive stuff with apps will become unusable sooner or later, when the software stops getting support while the appliance is still there.
Marijn has done a lot to make CM6 an accessible code editor. Now that both Monaco (the editor component of VS Code) and CM6 are accessible, online code runners/playgrounds and interactive code examples in docs/courses are almost guaranteed to be accessible when they use one of these editors. That removes a big accessibility barrier when you are learning to code and rely on assistive technology.
Yes, a nested list would clearly be the better structure here. Or even a description list (dl), with the comment's metadata in a dt and the comment itself in a dd.
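A minimal sketch of that dl idea (the author, timestamp and comment text here are made up for illustration):

```html
<dl>
  <!-- dt holds the comment's metadata -->
  <dt>example_user, 3 hours ago</dt>
  <!-- dd holds the comment body itself -->
  <dd>
    <p>The comment text goes here.</p>
  </dd>
</dl>
```

Replies could then nest another dl (or list) inside the parent dd, so the reply structure is exposed to assistive technology instead of being conveyed only by indentation.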
My thought exactly. An open source version could be nice in its own right, or could lead to good improvements in other open source speech synthesizers such as eSpeak. I think current TTS research/software is only focused on sounding nice and human. That's cool if you want to replace a human voice with something computer generated, but not ideal if you want efficient speech output that conveys as much information as possible, as fast as possible. Predictability is key here: I could proofread a text in Dutch (my native language) with ETI Eloquence set to English, and just from the sound of certain letter combinations I would know whether there was a spelling mistake. I couldn't do that with any other "better sounding" synth.
The engine behind that add-on, axe-core, can be called from JS, and there are some open source tools around to integrate it into your CI. I would say Axe is the gold standard for automated accessibility testing at this time. Not because it catches the highest number of issues, but because when it flags something you can be pretty sure it's a real issue and not a false positive.
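As a rough sketch of what calling it from a page looks like (assuming axe-core's `axe.min.js` bundle is served locally; `axe.run` returns a promise whose result includes a `violations` array):

```html
<script src="axe.min.js"></script>
<script>
  // Run axe-core against the whole document and report any violations.
  axe.run(document).then(results => {
    for (const v of results.violations) {
      console.log(v.id, v.impact, v.nodes.length + " affected node(s)");
    }
  });
</script>
```

CI integrations typically do the same thing via a headless browser and fail the build when `violations` is non-empty.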
Before doing screen reader testing on complex web components - which I see as a kind of black box test of your whole screen reader + browser stack - it is useful to look at what the browser actually passes to the screen reader. Firefox in particular has a very nice accessibility tree panel in its devtools these days. In my experience, the visual tree shown there is also easier/faster to read for sighted users who are not that quick with a screen reader.
Also, keep in mind that something that technically works correctly with screen readers is just the beginning. User testing may reveal lots of issues you wouldn't think of yourself. And yes, I know resources are usually limited and there is not much room for user testing, especially testing with screen reader users and other groups with disabilities. I recently worked as the accessibility lead on a mobile COVID exposure notification app that had a very simple UI and a hard accessibility requirement. We had the luxury of doing extensive user testing, and even in that simple interface we found lots of small changes that improved the experience for screen reader users.
Yes, I would like to publish some lessons learned somewhere in the future. For now, a few quick takeaways:
* Microcopy matters, a lot. We had a button stating "I've got a notification: read what you should do after getting a notification" (from the top of my head, freely translated from Dutch; we didn't have an English translation back then). It was part of a group of buttons on the main screen that all provided information. Some screen reader users got confused and thought they had actually received a notification. If you can't see the visual layout, it is not obvious that this is just a plain button and not bold red text warning you.
* In the same category: the app has a status text that says "The app is working fine" or "The app is not working fine". Visually, the error state is signified by an exclamation mark and styling that makes clear this is a serious issue. In the text, however, there is just one word, "not", to signify that. Following WCAG, the information conveyed by the exclamation mark icon was already available in text, so no text alternative was required. We gave it one anyway, to make sure screen reader users were also clearly alerted that something was wrong. The same goes for the "all is OK" icon: we gave that one a text alternative as well, to reassure users that everything is fine.
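As an illustrative sketch (not the actual app's markup; file names and wording are made up), one way to give such status icons text alternatives that screen readers will announce:

```html
<!-- Error state: the icon's alt text states the severity explicitly,
     instead of relying on the single word "not" in the status text. -->
<p>
  <img src="warning.svg" alt="Warning:">
  The app is not working fine
</p>

<!-- OK state: the icon gets a reassuring text alternative too. -->
<p>
  <img src="ok.svg" alt="All is fine:">
  The app is working fine
</p>
```

With `alt=""` instead, the icons would be skipped entirely and only the visually styled text would be spoken, which is exactly the problem described above.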
No time to read the whole thread right now, but feel free to get in touch (email is in my profile). I've been blind since birth and have had various software development jobs. These days I've shifted a bit and started my own company doing digital accessibility consulting.
If you're going to become totally blind (i.e. will need to transition to a screen reader some day), I would advise leaving the Mac platform. The built-in screen reader seems good at first, but falls down in complex work. Support for web browsing is suboptimal (Firefox is a no-go), and the screen reader is only updated on the regular OS X release cycle. That means bugs stick around for a long time and it's totally unclear what the status of a bug is. Also, VoiceOver's hackability is limited, and I find that a must for a tool I am 100% reliant on.
I'm very sympathetic to Linux and run it in many places (Raspberry Pi, home server, some stuff on VPSs), but I think Windows is the better accessible desktop experience now. Microsoft is pushing accessibility hard in most of their projects; this is often lacking in open source projects. Even when open source projects want to do a good job on accessibility, they usually lack the manpower or knowledge to do so. And given Docker and WSL (Windows Subsystem for Linux), it is easy to run Linux-based development workloads on a Windows box.
My editor of choice these days is VS Code; that team is also very active on the accessibility of their editor. I use the free and open source NVDA screen reader. If something in NVDA is broken, I can at least check their GitHub to see whether any work is being done, and if need be throw in a few patches myself.
So, summing up, I would say: figure out a set of accessible tools to do your job and learn them before you go blind. Relying on vision until the very last moment will mean an enormous productivity hit when the switch to 100% screen reader use comes (based on my experience training low vision and blind users in a previous job).
From what I've seen in the thread, others have already touched on some advantages of being a blind coder. You'll build a better mental model of your code out of necessity, and depending on your team/employer you can be a more valuable team member because you also bring knowledge of software accessibility.