I was incredibly lucky to recently do a live-streamed interview where Parham (OP) talked about his experience in tech, his country (Iran), and even showed us some of the tools he uses to write and debug code. It was a fascinating interview and can be viewed here:
At least in the part of the demo I watched, I didn't hear his screen reader. But I know that it's difficult to make a screen reader's speech output audible over a VoIP program like Skype, without causing a nasty feedback loop. The last time I attempted it, several years ago, I had to use a USB audio device in addition to my computer's built-in audio output, with an analog audio cable between the two.
By the way, what version of Windows do you use, Parham? I ask because it looks like you're using the Windows Classic theme, which isn't available in versions after Windows 7 as far as I know. I know that using this theme used to improve compatibility with screen readers like JAWS and Window-Eyes, but as far as I know, it doesn't matter with NVDA, though I suppose using the simpler theme would still reduce CPU and memory usage.
Yes, I'm using Windows 7. I use that theme because my screen reader is more responsive the less CPU and memory usage I have. So for example, if I open a heavyweight IDE, the fact that it's using so much CPU will make my screen reader lag.
What I think stinks is that companies seldom think of accessibility. In many cases, there are more people in need of special accessibility tools than there are users of older web browsers, yet when it comes to web development, all the focus is on the latter.
I wonder what is stopping accessibility from being as popular as backwards compatibility... None of the resources I used when I started learning web development ever mentioned it (aside from alt text on the image tag). You really have to seek out information about it deliberately.
There are several things that keep accessibility from being popular; here are a few that come to mind:
- developers generally don't need it, can't easily test it, and aren't even that aware of it
- "accessibility" can mean a lot of things. Are we talking about blindness, color blindness (daltonism), low vision, limited motor skills, or something else I don't even know about? Each has its own challenges, and it's difficult to address all of them deliberately
- accessibility is often regarded as "niche" by stakeholders (unless one of them requires it, or you operate at a really big scale), and thus not important. Unlike with backward compatibility, it's also not easy to get accurate numbers on how many of your users require it.
That is not to say accessibility is not important.
Generally, the first step to getting it right is NOT to design anything specific for these users, but instead to adhere to standards: use semantic HTML (or whatever your platform uses if you're building a native app), have enough contrast and big enough fonts, and take user settings into account (often that's as easy as not overriding behavior like zooming and scrolling).
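To make that concrete, here's a minimal sketch of the kind of semantic structure meant above (a hypothetical page skeleton; the content is made up). Screen readers expose these landmark elements automatically, so users can jump between them with no extra work from the developer:

```html
<!-- Semantic landmarks instead of anonymous divs. -->
<header>
  <nav aria-label="Main">
    <a href="/">Home</a>
    <a href="/archive">Archive</a>
  </nav>
</header>
<main>
  <article>
    <h1>Post title</h1>
    <p>Body text, with sufficient contrast and fonts that scale
       when the user zooms.</p>
  </article>
</main>
<footer>Contact and copyright information.</footer>
```

Each element here maps to a landmark role (banner, navigation, main, and so on) that assistive technology gets for free.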
The good thing about accessibility is that if your app/website is accessible, it also benefits all your other users in various ways.
Fascinating. We programmers live by our eyes, and I've had a few frightening flights of fancy about losing my sight, and what employment would be like in this field. I still wonder though how you'd go about reading other people's (bad) code, since there isn't any prior memory about what it was _supposed_ to do, and how you perform major refactors that just appear so "visual" to my way of thinking, e.g. hollowing out a function body into two smaller functions and adding a tail call.
Actually, a lot of things you may perceive as visual are not visual, at least if you imagine "visual" as something I (a blind person) cannot understand.
I do a lot of refactoring, maybe even more than you do, simply because I can't understand functions that have two indentation levels (like two if statements inside each other), or functions that go above 15 lines. I'm saying that I may do it more than you because, unlike sighted people, we can't glance at 6 lines at once; we have to look at the code one line at a time. So, the smaller each section (whether it's an if statement, a loop, or a function, depending on the language itself), the better for me.
I'd like to believe, given that the largest part of the brain is devoted to visual processing, that it gets freed up for memory/creativity etc.
Given the right kind of guidance and development, people with "disabilities" could bring a very unique perspective and new ways of thinking to problems and algorithms.
Deaf people for example think visually from birth as they have never heard sound. And we still don't even have a good way of representing sign language on paper or digitally.
This is really one of the points that always amazes me. We have all this potential, right here on Earth. Why are we looking for aliens? Why aren't we spending more time learning how people missing a particular sense understand a particular concept?
This occurred to me while I was struggling with linear regression and its use in machine learning.
>e.g. hollowing out a function body into two smaller functions and adding a tail call.
Actually, there's a direct analogy there. The reason you do that is to make it easier to comprehend. It looks messy; too much info. Now imagine you have to listen to your screen reader speaking all of it. Same deal: too much, time to refactor to something more navigable.
Yes, I also fear losing my sight as I get older.
I hope I can still code even when I'm 70 or 80 years old. It's not just for making a living; it's also for dignity.
I wonder if it might be worthwhile for sighted developers to set up a system that a blind person would use and then blindfold themselves, and start using the system...
Perhaps that would make it easier for us to develop software that works well for blind people?
Part of the problem is that most accessibility software has a hell of a learning curve that the blind have implicitly overcome already just to use the thing at all (e.g. emacspeak). It can be very daunting to try to use it as a sighted person with a blindfold over your eyes.
Have blind people overcome this because of past experience and living with the fact of being blind, or is it the case that the software isn't necessarily intuitive in the first place?
I think when I have a moment I might attempt this as an exercise...
If I could see, I'd probably wonder at that moment how you can bear using a mouse when the keyboard is so much quicker. However, you can probably use the mouse much faster than I can.
Why? It's probably the power of habit. Using a keyboard-mouse combination to interact with the computer is how you've always worked. Using a screen reader with a large number of specialized keystrokes is how I've always worked.
I've had a tiny, tiny amount of experience with using NVDA through work and the first hurdle is the incredible number of keyboard shortcuts to learn. The only other tip I can give is to unplug your mouse and give it a go.
I do not, myself, know why this is the case. I can come up with several plausible reasons; for example, you're basically coming up with an alternate input device for your computer piggybacked onto something that's already there. Additionally these programs basically are the computer if you're blind; you develop a lot of experience with them quickly because the alternative is not using it at all. And some amount of it is probably cultural/historical, we make it hard because it's always been hard. Truly, I don't know, and I don't know anyone who does.
I imagine that if we tried to make programming by voice more practical, that would help solve part of the problem. A few developers have made it work but it requires a lot of setup.
I think coding by voice solves a different challenge than the one blind developers are facing. Blind developers have no issue manipulating a keyboard. I feel the most immediate accessibility issue facing blind and low-vision developers is quickly being made aware of context changes and making sure the element you expect to have selected is actually selected.
An IDE is pretty much useless to a blind developer if the screen reader cannot adequately relay the current context to the developer.
But what about making it work with existing tools? My understanding is that when developing software there are a variety of things you must implement so that these tools work well.
I'm interested because LibreOffice has an accessibility layer which I've never explored but that is cross platform.
As a blind programmer I am thrilled at the progression of Amazon's Echo. The accessibility functions are all too often overshadowed by issues of privacy.
Lauren Milne and Catherine Baker are both graduate students of his, so I'm going to guess that he was the professor mentioned. He is also one of two professors at the "Taskar Center for Accessible Technology":
Good article. Only when we first had a visually impaired end user asking for accessibility did we understand what was wrong.
As a result, the next generation of our tooling runs from the command line without a mouse. That solved two problems: 100% accessibility (no mouse or graphs involved, just log files or console output) and automation (making things easier to deploy without user input).
Accessibility doesn't need to be an afterthought. When planned for ahead of time, it can actually make tooling simpler.
Blind people often have exceptional memory: they've done intensive mental exercise and rely on their memory so much that they've surpassed how a typical person thinks.
Great post as it made me realize how ignorant I am at times. The idea that it must be really hard to understand data visualization, graphs and some chunks of mathematics if you're blind has been in the back of my head but I never really thought about the real life implications.
Thank you for the post and the linked "Autobiography of a Blind Programmer".
I also noticed the "15 minute read" under the date. Haven't really seen that much but it must be pretty valuable if you're blind since you can't estimate the reading time from glancing at the scrollbar etc.
[Even though I guess this could be automated as a feature in screen readers with a wordcount?]
This quote from the end of the article is a fantastic general attitude towards life:
"""There are a few things I love about being blind. One of them is the need to carve out your own path most of the time."""
How would a blind programmer deal with something like Python? It seems that having indentation be part of the syntax would make it difficult to figure out, unless the screen reading software was made to understand that same indentation = same block.
This was a problem years ago. However, nowadays almost all screen readers have a setting to report the indentation of the current line. This feature was popularized when NVDA (a Windows screen reader) and Orca (a screen reader for Linux) were written, since they are both written in Python.
thanks for chiming in, I was really wondering about it
btw, if you read this, do you have any recommendations for ways we can make sure our sites are accessible? For my blog, say, I have alt attributes on everything and tried to organize things meaningfully in terms of heading tags. I would also like to add ARIA attributes, but I'm not sure how to check that they were done correctly; even if I installed a screen reader, I don't have any experience with one, so I wouldn't know whether what I hear is correct or not.
(Not the OP, but I have experience in this area too.)
You're on the right track. As a finishing touch, you should add role="main" to whichever element contains the main page content (in this case, the article or the most recent articles). That way, screen reader users can easily skip past all the navigation links and other auxiliary elements that are common to every page.
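For instance (a hypothetical blog layout; the ids are invented for the example), the attribute goes on whatever wrapper already holds the content:

```html
<div id="sidebar">Navigation links, blogroll, archives.</div>
<div id="content" role="main">
  <h1>Most recent articles</h1>
  <!-- Article list goes here. -->
</div>
```

In current HTML you can also use the <main> element, which carries this role implicitly; the explicit attribute is still harmless and helps older assistive technology.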
You can also use some accessibility evaluation tools. I would suggest that you go here, and check the box that says "WCAG 2.0" to get some tools that check accessibility in general:
If you fix the issues these evaluation tools show you, I think your blog will be among the very well-thought-out websites in terms of accessibility. WAI-ARIA is for cases where your application does a lot with JavaScript, such as updating a region of text, or when you use non-semantic elements for things they weren't meant for (for example, using the i tag for an icon, or a span as a link, etc.).
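As a small illustration of that JavaScript case (the id and text here are invented), an aria-live region tells the screen reader to announce changes to a chunk of text without the user having to move focus to it:

```html
<div id="search-status" aria-live="polite">No results yet.</div>
<script>
  // Hypothetical callback after a search completes: updating the
  // region's text is enough for screen readers to announce it.
  document.getElementById('search-status').textContent =
    '12 results found';
</script>
```

With aria-live="polite" the announcement waits until the user is idle; "assertive" interrupts immediately and should be reserved for urgent updates.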
From my personal experience, it's useful if the editor has at least some basic support for these kinds of blocks, e.g. the enter key opens a line at the same indent level. Editor scripting also helps: you can write a script to report the indent level on demand. Of course, many screen readers can report the number of spaces/tabs (i.e. the raw value), but you usually want only the level with respect to the current codebase, and most likely only sometimes (e.g. not on every line movement); you can often infer the blocks from context and recheck only if needed or if things go wrong.
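A sketch of what such an editor script might compute (the function names and defaults here are my own, not any particular editor's API): infer the file's indent unit once, then report a line's level relative to it on demand.

```python
def indent_level(line: str, unit: int = 4, tab_size: int = 8) -> int:
    """Return the indentation level of a line, given the file's indent unit.

    Tabs are expanded to tab_size spaces first; both defaults are
    illustrative and would normally come from editor settings.
    """
    expanded = line.expandtabs(tab_size)
    spaces = len(expanded) - len(expanded.lstrip(" "))
    return spaces // unit


def infer_indent_unit(lines) -> int:
    """Guess the indent unit as the smallest nonzero leading-space count."""
    widths = []
    for line in lines:
        stripped = line.lstrip(" ")
        if stripped and not stripped.startswith("\t"):
            width = len(line) - len(stripped)
            if width:
                widths.append(width)
    return min(widths) if widths else 4


source = [
    "def f(x):",
    "    if x:",
    "        return x * 2",
    "    return 0",
]
unit = infer_indent_unit(source)
print(unit)                           # → 4 (smallest indent step in the file)
print(indent_level(source[2], unit))  # → 2 (nesting level of 'return x * 2')
```

A real script would hook this to a keystroke and speak "level 2" instead of printing, which is far less chatty than hearing "eight spaces" on every line.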
One particular point that seems obvious after reading, but would not have occurred to me: If you ask a sighted person to imagine a nearby and a distant object (say, a car), the distant object is imagined as smaller than the nearby one. But that distance / visual size relationship is a totally visual thing, because you're imagining seeing the objects; a blind person's mental model of the object would be built from touch and is the same regardless of distance. They don't have the visual perspective cues. (If the object makes sounds, I suppose the distant one is imagined as quieter?)
To take that a bit further, I'd guess that blind people's mental models of geometry, whether of a small object or of navigating within a building, are purely orthographic in nature. They wouldn't have any use for our ideas of visual perspective, field of view, lines of sight, or any of that. Artists didn't work out how to replicate perspective properly until the Renaissance, so it's not as though people have an innate conscious grasp of how it works; we just absorb it as an automatic part of our sight.
And on the other hand, the blind probably have a much more developed model of spatial acoustics: changes in reverberation as you pass an open doorway, move from a corridor into a larger space, etc., which our brains for the most part don't learn to build.
That is very true. My colleagues had a hard time explaining how one can draw in 3D on a 2D medium, like a board. It was then that I realized sighted people perceive things differently when they are far away, which is different from how I hear sound. At least, hearing hasn't been researched as widely as sight has.
Blind people likely understand space far better than sighted individuals. But computer desktop geometry is something entirely different, and not something that's easy to describe.
In any case, it's never a good idea to question someone with a different set of abilities than you have with an "is it really that hard?" Their experience far outweighs your naive intuition about what's easy and hard for them.
> In any case, it's never a good idea to question someone with a different set of abilities than you have with an "is it really that hard?" Their experience far outweighs your naive intuition about what's easy and hard for them.
Wise words, certainly. I apologize if my tone came across as condescending. I asked out of genuine curiosity, as I figured that someone who did not rely on their eyes could have a different, but not worse, perhaps even better understanding of space and spatial relations than someone who does (accounting for individuals being unique, and all that).
But perhaps I'm being blinded by my own eyes. After reading the article wlesieutre linked to, I believe it might be the case.
I'm genuinely curious how you would think of three dimensional objects without ever being able to see them...
I guess you would have to describe them. As a suggestion - and not an attack in any way because I'm not sure how I'd do it! - without reference to anything you can see with your eyes, how would you describe space?
The thing with 3-dimensional objects is that you encounter a lot of them in real life. All it takes to show a blind person what three dimensions are like is to pick up a keyboard, present its different sides, and explain: "This is the height, this is the width, and this is the depth."
What? How would you think of three-dimensional objects without ever being able to feel them? (I say this as someone with reasonable vision, but surely sound and especially touch are much better at spatial positioning than mere faulty "stereoscopic" vision -- there's a reason you might want to practice martial arts with a blindfold...)
Good point! But is this done without any reference to 3-dimensional objects? That is, does it take the principle without making assumptions, describing it in such a way that the concept is explained without any reference to sight?
It is ... difficult for me to describe just how dangerous it is to try to use 3-dimensional intuitions in a higher-dimensional space. As the number of dimensions increases, the amount of "space" there is for things to get weird grows much faster; for example, any knot that exists in 3 dimensions can be untied trivially in a 4-dimensional space.
Suffice to say, if you try to use your ordinary everyday intuitions when doing higher-dimensional mathematics, you will continuously embarrass yourself until you learn not to. Some folks develop the necessary adjustments and have higher-dimensional intuitions; I do not, and have to do the math anywhere above around four dimensions.
You could take a linear algebra approach and explain dimension as the number of possible independent directions of movement. This could be experienced through touch. Start with a pencil, then a piece of paper, then 3-space, and so on.
> I'm genuinely curious how you would think of three dimensional objects without ever being able to see them... [...]
> without reference to anything you can see with your eyes, how would you describe space?
But the thing is: I wouldn't have to. People without sight live in space too. What mental models of the universe anyone has inside their head may vary wildly, but at the end of the day a torus is still torus-shaped when the lights are out, and West is to your right when you turn North while upside down.
https://www.livecoding.tv/parham90/videos/z8MoB-how-do-blind...
edit: you may have to fast forward about 17 minutes in to get past the intro buffer.
Also part 2 is where all of the demo stuff is:
https://www.livecoding.tv/parham90/videos/nD1Er-how-do-blind...