Recently I wanted to make a few simple apps that would make API requests and control a few things around my house that are set up with Raspberry Pis.
A friend recommended the drag-and-drop app builder Thunkable[1], and I was blown away by just how easy it was to make something. In a couple of hours I had all the apps I needed. No complicated build process, just download the apk from the site. Some people might turn their noses up at it because it uses Scratch-style blocks for writing the code, but I feel this is what people's first experience of coding should be like. Write a simple recipe, get a simple program they can use straight away. These days it must be an app or a web app to have any relevance to people.
Thanks for the recommendation! This is very much what I've been looking for. Easy GUI, super simple to use, but can go deep and get it to do most things I'd want. After several false starts looking at the different app options, this is so refreshing :)
I've done a bit of Salesforce dev as I'm CTO of a small startup (~12 headcount currently). The Process and Flow builders are a breath of fresh air to allow me to do simple logic-based programming tasks which respond to events and data changes on the platform. It's great! I have 0 desire to learn Apex and our requirements are moderately complicated but easily encapsulated by what you can do with the configuration frameworks and logic-based visual programming system. 100% think more platforms should have this.
Well that's one piece of software I'm never going to use. I can't even browse their site without creating an account. I presume I also need an account in order to use the software. No thanks.
"Home-cooked" software is such a lovely analogy. I've used the same comparison to explain my unease about ad-hoc spreadsheets being replaced by domain-specific software [1]. Often the new software is "better" just like restaurant food is "better" than a home-cooked meal, but it's also great when people can build small-scale, super flexible software that works perfectly for just their own needs.
I loved TFA's callout to HyperCard, like your own argument about spreadsheets being "flexible generic tools".
I feel like IFTTT is approaching the same level of utility, though it exists in a space where it's incredibly vulnerable to change against its users' interests.
A few more tools like spreadsheets and HyperCard, and a whole lot of people with home-kitchen level of software competence, could free us from a lot of expensive restaurant dining. And we'd be more self-sufficient, to boot.
What's it take to get HyperCard back? Or something like it? Not just functionality, but ubiquity -- being a tool that every kid is one click away from, nothing to install, no permission to ask, just start learning...
For the ubiquity you need it to be free and extremely simple to install. Apple gave HyperCard away for free preinstalled on its machines. Ideally, you would persuade major manufacturers to do the same with your proposed replacement (good luck with that!).
Technologically, you need representations of 8 concepts:
- card: a container that appears on the screen, holds fields, buttons, and scripts, handles events, and is displayed in front of a background.
- background: a container that goes in a stack and can contain fields, buttons, graphics, and scripts. Every card with the same background shows the same background contents.
- field: a container that holds text strings. It can be styled. You can control whether it's editable and whether it scrolls.
- button: a clickable widget that runs a script. You can style buttons and give them text to display.
- script: a small program that you can attach to any of the other objects, to be run when a specific event occurs. Scripts can refer to the other objects (and also other scripts) by name, ID, or containment path.
- event: a representation of a user or runtime action that can trigger scripts to run. Examples are mousedown, mouseup, mousemove, open stack, close stack, open card, close card, keydown, keyup, and so on. Events are named. Scripts can be defined on these names and will be executed when an event with a matching name occurs.
- stack: a conceptual container that holds all the other objects. Each stack represents something like both a document and an application. Scripts can refer to stacks by name, ID, or pathname in a filesystem (you might want to use URLs).
- messagebox: a universally-accessible repl window running a top-level loop that accepts and interprets script statements and expressions, and prints their results. The effective scope of the messagebox is always the scope of the currently open and active card, background, and stack.
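Sketching those concepts as code makes the containment relationships concrete. Here is a minimal, hypothetical model in Python; none of these class names come from HyperCard itself, and "event" and "messagebox" appear only implicitly (the scripts dict maps event names to handlers):

```python
# Hypothetical sketch of the HyperCard concepts as a class hierarchy.
# Illustration only, not a real API.

class Part:
    """Anything that can carry scripts keyed by event name."""
    def __init__(self, name):
        self.name = name
        self.scripts = {}          # event name -> handler callable

class Field(Part):
    """Holds styled, possibly editable text."""
    def __init__(self, name, text=""):
        super().__init__(name)
        self.text = text

class Button(Part):
    """Clickable widget whose script typically runs on mouseUp."""

class Background(Part):
    """Shared layer: every card using it shows the same parts."""
    def __init__(self, name):
        super().__init__(name)
        self.parts = []            # fields and buttons shared across cards

class Card(Part):
    """Screen-sized container, drawn in front of its background."""
    def __init__(self, name, background):
        super().__init__(name)
        self.background = background
        self.parts = []            # card-specific fields and buttons

class Stack(Part):
    """Both document and application: holds everything else."""
    def __init__(self, name):
        super().__init__(name)
        self.backgrounds = []
        self.cards = []

# Two cards sharing one background both "contain" the background's field.
bg = Background("plain")
bg.parts.append(Field("title", text="Hello"))
home = Stack("Home")
home.backgrounds.append(bg)
home.cards += [Card("first", bg), Card("second", bg)]
```

The key property is that a background part lives in exactly one place but is visible from every card that uses that background.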
Cards and backgrounds are layered, visually. The contents of a card always appear on top of the contents of its background.
There are also layers within each card and each background. The bottom layer contains graphics (in HyperCard that meant a single layer of black-and-white paint pixels), and the other visual objects are stacked on top of the graphics in the order they're added. You need some kind of picture-editing tools to make these graphics. HyperCard had a sort of mini-MacPaint built in for this purpose.
Each stack represents a containment hierarchy: the stack contains all the other objects; backgrounds and cards contain fields, buttons, backgrounds, and scripts. The stack knows the container of each object. When an event occurs, a sequence of objects has the opportunity to handle it, starting with the object in which it occurs. For example, if you mouse down on a button, the button is offered the event first, then its card or background, then its stack. The containment hierarchy acts like nested lexical scopes. Scripts can always see the entities that are present in the object to which they're attached, plus all of the nested containers that contain it.
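That bubbling behavior can be sketched in a few lines. Everything here (the `Obj` class, the `dispatch` function) is my own hypothetical illustration of the idea, not HyperTalk's actual mechanism:

```python
# Sketch of the event-handling chain: the event is offered to the object
# it occurred on, then to each enclosing container until someone handles it.

def dispatch(event, target):
    """Walk the containment chain (object -> card -> stack) for a handler."""
    obj = target
    while obj is not None:
        handler = obj.handlers.get(event)
        if handler is not None:
            return handler(event)
        obj = obj.parent        # becomes None once we pass the stack
    return None                 # unhandled

class Obj:
    def __init__(self, parent=None):
        self.parent = parent
        self.handlers = {}

stack = Obj()
card = Obj(parent=stack)
button = Obj(parent=card)

# The card, not the button, defines the mouseUp handler;
# the event still reaches it by bubbling up from the button.
card.handlers["mouseUp"] = lambda ev: "card handled " + ev
print(dispatch("mouseUp", button))   # card handled mouseUp
```

An unhandled event simply falls off the top of the chain, which is also how HyperCard let outer containers supply default behavior.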
You need a file format that can store all of these things conveniently. HyperCard put all of them into a single file per stack, which was especially convenient. To the user, the file was the stack. Simple. That's what you want. An obvious choice would be to store a stack in a sqlite file, but be prepared for a lot of work on the schema to get things laid out and working right.
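As a rough illustration of what that schema work might look like, here is one hypothetical sqlite layout; all table and column names are mine, not from any real implementation:

```python
# One possible sqlite layout for a stack-in-a-single-file:
# the file *is* the stack, as in HyperCard.
import sqlite3

conn = sqlite3.connect(":memory:")   # would be e.g. "mystack.stack" on disk
conn.executescript("""
CREATE TABLE stack       (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE backgrounds (id INTEGER PRIMARY KEY,
                          stack_id INTEGER REFERENCES stack(id),
                          name TEXT, script TEXT);
CREATE TABLE cards       (id INTEGER PRIMARY KEY,
                          stack_id INTEGER REFERENCES stack(id),
                          background_id INTEGER REFERENCES backgrounds(id),
                          name TEXT, script TEXT,
                          seq INTEGER);                  -- card order in stack
CREATE TABLE parts       (id INTEGER PRIMARY KEY,        -- fields and buttons
                          owner_kind TEXT CHECK (owner_kind IN ('card','background')),
                          owner_id INTEGER,
                          kind TEXT CHECK (kind IN ('field','button')),
                          name TEXT, contents TEXT, script TEXT,
                          layer INTEGER);                -- stacking order
""")
conn.execute("INSERT INTO stack (name) VALUES ('Home')")
bg = conn.execute(
    "INSERT INTO backgrounds (stack_id, name) VALUES (1, 'plain')").lastrowid
conn.execute(
    "INSERT INTO cards (stack_id, background_id, name, seq) "
    "VALUES (1, ?, 'first', 1)", (bg,))
```

Graphics layers, styles, and script bytecode would all need their own columns or tables, which is where most of the schema work would actually go.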
You need a scripting language in which all of those objects mentioned above are first-class named classes or prototypes that can be instantiated. HyperCard used a purpose-built language called HyperTalk, which was designed to approximate colloquial English for the sake of approachability. You probably don't need to do that. You could probably use something like Lua or Python. You would need to build a comprehensive library to represent and operate on all the standard objects.
You need a top-level program that runs the stacks. Stacks should be designed to be self-contained, so that the top-level program doesn't need anything else to run a randomly-chosen stack. Stacks can refer to each other, but if you make a stack that does that and try to use it in a context where you've forgotten to include the other stack, that's your lookout.
The problem is that the tasks we ask of our software are harder - we now expect them to talk to other computers, not all of which really want to be talked to by random people writing one-off bits of software.
> not all of which really want to be talked to by random people writing one-off bits of software.
That's a bigger problem than it sounds.
There are two main reasons software doesn't want to talk to each other. One, preventing vandalism and general abuse. Two, rent seeking. The software is wrapped around something you need, and it'll only allow access on its own terms, and after an appropriate fee.
I dream of an Internet where this isn't the case. In particular, I dream of an Internet where I don't have to choose between either writing scrapers, or entering separate relationships with companies for every little bit of information I want to fetch via an API. Those relationships are a problem and a liability, even if they don't incur any actual monetary costs.
Beyond just being annoying noise that you have to keep track of, relationships make software non-shareable; they force you to centralize. If I make my "home-cooked" app that needs to know the weather, currency rates and be able to derive geographic coordinates from addresses, that's three separate APIs, three separate relationships. I'm not going to ask any of my users (whether five or five million) to get their own API keys, get their own relationships with the API providers. I'm forced to put a server in front of the app, introducing a single point of failure, shackling the app - and more importantly, I'm forced to force the users to enter a relationship with me. Their copy of the app is now dependent on my server not going down, and me managing my relationships with API providers.
Cooking recipes people share never force you to use a particular ingredient supplier. They don't even care where you get the ingredients - whether you grow them, buy them, or receive them as gifts. I wish software was more like this.
I've done something very similar for my wife: She likes reading otome visual novel games on Android and the game design/monetization strategy means that it's difficult to "go back" to a story line.
I noticed her repeatedly screenshotting her phone and asked what she was doing - she said she likes to re-read the stories and so she swipes through her gallery to satisfy this need.
So I wrote a simple wrapper around adb, built a plain C# WPF front end for it, and passed through clicks and automated screenshot calls for her. She plugs in her phone, fires up the app, and she can read her VNs on her PC and have screenshots saved on the PC for her.
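For anyone curious what such a wrapper boils down to: the original was C#/WPF, but the core is just shelling out to adb. A minimal Python sketch under that assumption (the function names are mine):

```python
# Sketch of an adb wrapper: pass taps through to the phone,
# pull screenshots back to the PC.
import subprocess

def screenshot_cmd(serial=None):
    """Build the adb command that writes a PNG screenshot to stdout."""
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]        # pick a device when several are attached
    return cmd + ["exec-out", "screencap", "-p"]

def tap_cmd(x, y, serial=None):
    """Build the adb command that injects a tap at screen coordinates (x, y)."""
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]
    return cmd + ["shell", "input", "tap", str(x), str(y)]

def save_screenshot(path, serial=None):
    """Capture the connected phone's screen into a local PNG file."""
    png = subprocess.run(screenshot_cmd(serial), check=True,
                         capture_output=True).stdout
    with open(path, "wb") as f:
        f.write(png)
```

`adb exec-out screencap -p` and `adb shell input tap` are standard adb invocations; the front end is just a loop of taps and `save_screenshot` calls.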
I built a variety of other processing apps to solve other needs for managing these files.
There were some free-to-download apps that solved a lot of these problems but none in quite the right way, and a lot of them looked fairly sketchy.
I don't really have any intention of distributing the software I wrote because it's so specific to her needs, and at best I might throw it up somewhere so I have the source backed up, as every once in awhile she asks for something new to add to this program.
Reminds me of the QueryStorm story; apparently the project started because the author wanted to help his girlfriend with her work (which involved lots of data and Excel files).
Now, some of you may not ever write computer programs, but perhaps you cook. And if you cook, unless you're really great, you probably use recipes. And, if you use recipes, you've probably had the experience of getting a copy of a recipe from a friend who's sharing it. And you've probably also had the experience — unless you're a total neophyte — of changing a recipe. You know, it says certain things, but you don't have to do exactly that. You can leave out some ingredients. Add some mushrooms, 'cause you like mushrooms. Put in less salt because your doctor said you should cut down on salt — whatever. You can even make bigger changes according to your skill. And if you've made changes in a recipe, and you cook it for your friends, and they like it, one of your friends might say, “Hey, could I have the recipe?” And then, what do you do? You could write down your modified version of the recipe and make a copy for your friend. These are the natural things to do with functionally useful recipes of any kind.
Now a recipe is a lot like a computer program. A computer program's a lot like a recipe: a series of steps to be carried out to get some result that you want. So it's just as natural to do those same things with computer programs — hand a copy to your friend. Make changes in it because the job it was written to do isn't exactly what you want. It did a great job for somebody else, but your job is a different job. And after you've changed it, that's likely to be useful for other people. Maybe they have a job to do that's like the job you do. So they ask, “Hey, can I have a copy?” Of course, if you're a nice person, you're going to give a copy. That's the way to be a decent person.
Sounds nice, except programming is difficult and akin to magic for most people. What's proposed here (getting a job done when your requirements differ from how your friend uses the program) is usually achieved by having a program that takes a lot of configuration parameters and then does the job according to them. IMO that's the more fitting analogy.
> except programming is difficult and akin to magic for most people.
So is cooking to quite many of them, myself included. I find programming easier than cooking - because although it takes much longer to achieve anything, it also doesn't cost anything on the margin, you can pause the process at any time, and you don't risk hurting or killing yourself.
> is usually achieved by having a program that takes a lot of configuration parameters and then does the job according to them
The ultimate form of "configuration parameters" is the code itself. Phrased alternatively, configuration is just code in a non-Turing-complete language. Code is data is code.
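A toy illustration of the point: the same filtering job expressed once as a restricted configuration language that the program interprets, and once directly as code.

```python
# "Configuration is just code in a non-Turing-complete language."

# 1. As configuration: a tiny data language the program interprets.
config = {"field": "age", "op": ">=", "value": 18}

OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def matches(record, cfg):
    return OPS[cfg["op"]](record[cfg["field"]], cfg["value"])

# 2. As code: the "ultimate configuration parameter" is a function.
predicate = lambda record: record["age"] >= 18

people = [{"name": "Ada", "age": 36}, {"name": "Sam", "age": 12}]
print([p["name"] for p in people if matches(p, config)])   # ['Ada']
print([p["name"] for p in people if predicate(p)])         # ['Ada']
```

The config dict can be stored, shared, and edited by non-programmers, but can only express what `OPS` anticipates; the function can express anything, which is exactly the trade-off being described.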
There is a gap in tooling: there are currently no good HyperCard-like tools that would let people make "personal software" and share it as recipes. That's perhaps because computing is still in its inflation phase and there's too much platform diversity; hopefully that will change in some way in the future. But lack of the necessary tooling doesn't mean the vision is wrong, especially a vision that was true in the past.
Professional software engineers often struggle to get their development environment up and running quickly. My wife is a historian and does a lot of work with R. The process of getting the development environment working and keeping it working was a nightmare. "What the fuck does that error message mean? When I Google it nothing comes up. What do you mean I have the wrong version of python? What the fuck is a PATH variable?"
With cooking there are entire stores dedicated to selling you things that you can use. Almost every home comes with a working "cooking environment". Even just making it so you could recompile some software if you wanted to is way way way way beyond the expected capabilities for a typical person, especially since a tremendous amount of OSS code is not portable and built for linux while most people have windows boxes.
Philip Guo wrote a great post several years ago under the title "Helping my students overcome command-line bullshittery"[1] that seemed to get somewhat mixed but mostly positive reception. Much of the negative reception seemed to be chained to sophomoric arguments originating from folks stuck in the second panel of the glowing brain meme who wrongly thought of Guo being stuck in the first.
The real truth behind the mess we're in[2] is that there is a ubiquitous, universal runtime that almost every computer comes equipped with, and the problem lies with the folks responsible for those ecosystems who either don't see these things as problems, or somehow believe that what the future somehow holds is native support for R/Python/what-have-you in the browser.
Tooling is a massive problem, though, and one that the browser vendors themselves don't seem to care to get right. (Although there is the Iodide project, in part supported by Mozilla.) And it really doesn't help that the browser realm has come to be conflated with the NodeJS community because they share a common language.
I've written a fairly thoughtful post[3] before, tying these two topics together:
> After finding out where to download the SDK and then doing exactly that, you might then spend anywhere from a few seconds or minutes to what might turn out to be a few days wrestling with it before it's set up for your use. [...]
> the question is whether it's possible to contrive a system (a term I'll use to loosely refer to something involving a language, an environment, and a set of practices) built around the core value that zero-cost setup is important
Neither of you is wrong. There are two basic models that I see.
Some people never cook, and only eat food from restaurants. They only use their kitchen as the place to store leftovers in the fridge and reheat them in the microwave. The way they get food is to have experts make standard items for them, and maybe sometimes ask the expert to customize it slightly.
On the other end of the spectrum are people who cook every meal for themselves. It's not that hard to get started, and the consequences of failure are pretty low. Various people like to do it because it's fun, or cheap, or social, or they have special requirements, or whatever.
Software is exactly the same. Some people want only standard pre-built units from experts, and want to pay those experts to build, customize, and deliver it for them. Other people think it's {fun/cheap/social/necessary/whatever} to build and customize their own software, and don't want this to be solely the domain of experts.
RMS is clearly in one camp, or if it's a spectrum then he's all the way on one end. It's true that a lot of software today is pre-built units from experts, with only minor customization possible. That wasn't always the case. RMS is presenting a vision of a future where we're not eating all our software from restaurants.
Having seen how childish and uncooperative corporations are with software, I think it's a great vision and I'm all for it.
Sure, the requirements differ, but so do the circumstances under which a meal has to be cooked (differing numbers of diners, allergies, different properties of ingredients requiring changes, ...). Cooking can be difficult as well; see French cuisine.
I was thinking of Unix command line tools. Every one of them comes with a buttload of parameters and some of them make me think 'why would you ever need this?!' :)
Actually, you have the same liberties with software as you do with recipes. The only problem is that it is vastly more complex, so you can't really use the liberties.
You are not allowed to copy a recipe you found in a book. However, you can understand the concept and reproduce it in your own words. The same is true for software: you can't copy any source code that you see, but you are allowed to understand the source code and rewrite it in your own words. However, that is rarely doable, even for trained professionals.
Also, just like with food, while there are people who want the recipe, a lot more people just want to eat, and have no desire to see the recipe, even less to change it. The restaurant industry is a lot larger than the recipe book industry.
So overall, the metaphor is perfect, but Stallman is drawing the wrong conclusions from it, and forgetting the huge complexity difference between the most complex food recipe and the simplest useful program.
A more charitable reading takes this quote to highlight the disconnect between what people expect to do with things (recipes), and what is actually allowed legally.
If you told someone they can't legally copy their neighbor's or grandparent's recipe, or heck, even one from a random website, they'd be very surprised, because this is what's actually happening.
Now, with this reality in mind, Stallman's argument makes a lot more sense and reaches the conclusion: software resembles recipes in practice, but the law is incongruent with this practice.
The law about copyright on recipes is more complex. Certain kinds of recipes (e.g. ones where the description of the steps is somewhat lyrical) are probably copyright-able, and so are collections of recipes, such as cookbooks. Photographs accompanying recipes are also copyrighted, so if you explain your recipe in pictures or video, that is also copyrighted.
> You are not allowed to copy a recipe you found in a book.
I believe that to be incorrect. You are, in fact, allowed to copy the facts of a recipe. Copyright does not apply to recipes. The same goes for, for instance, clothing designs. (This is why expensive brands make designs with their logo plastered all over it; the logotype is covered by trademark law, even though copyright does not apply.)
I absolutely love this comparison. Making computers do what you want using a core set of skills, personal taste, some equipment, and a set of reference works is absolutely analogous to cooking at home for the family.
I feel a bit like this touches on operating systems as well. After a decade with macOS — away from Debian, ion3, evince, Firefox, and to some extent the command line — I get that same feeling of overexposure one might feel after eating out for every meal. Restaurant food is polished and diverse but it’s overly tasty and the options for customization are coarse (no pun intended). You can choose a dish but not how it’s made.
The analogy with home cooking makes me want to go back to a computing environment where I get to put the ingredients together that work best for me even if it isn’t something that’s perfect enough to sell to other people. Hackable window managers aren’t the be all and end all but they are a good example of what I’m talking about. Time to buy that $100 Thinkpad, I guess.
I remember the fantasy that more and more people would be able to develop simple basic apps for their own very personal customized needs. It was a vision of a more accessible and democratic compute infrastructure, where we'd all be 'makers' creating the compute environment we lived in. Things like HyperCard were part of this vision.
Those days are gone. The article doesn't explicitly mention the fact we all know: Very few people have the capacity to create a 'home-cooked meal' like the OP, for their own use.
In fact much fewer than could create a little hypercard app. It's a world where there are pretty large barriers to most people being software 'makers', and instead we are just consumers who can choose from what is offered to us by various companies trying to figure out how to monetize our needs and desires.
Part of it is the increased complexity of app development. Part of it is the walled gardens of our suppliers.
> I distributed the app to my family using TestFlight, and in TestFlight it shall remain forever: a cozy, eternal beta.
Indeed. Because the very infrastructure ignores the needs of one-off "home cooked meal" apps and assumes they don't exist, you have to pretend your app is some kind of "beta" of a mass-market commodified product instead (and be glad our suppliers still let you do that for 'free'). Our computers (increasingly the phone/device is people's primary computer) are no longer ours to do with as we will.
It is sad. If those who created the computing revolution 30 years ago (including prior versions of us) could see where it wound up.... The level of technological sophistication of our pocket audio-video-geolocated-self-surveilling-communicating devices is astounding; the lack of 'empowerment' or control we have over them is utterly depressing.
This is so pessimistic! What is the basis for thinking there are large barriers to most people becoming software makers?
I was in high school 12 years ago, when iPhones had just hit the market at like 800 bucks each and there was no Firebase or React Native, no Medium articles or YouTube tutorials covering every tech stack from the first code-generation command through deployment. The language of the day was Java. There was no npm and not a trillion Python libraries that let you do anything imaginable.
Is there any specific reason a 16 year old today couldn't make the app this guy did?
As a software engineer I like the idea that I do a black magic that nobody could ever understand, but I genuinely don't think it's true.
> What is the basis for thinking there are large barriers to most people becoming software makers?
You have an idea for an app. How do you build it? Your options generally are:
1) Do it the hard way: get Xcode/Android Studio, spend years learning to a) program, b) deal with the bloated and overly complicated frameworks, c) deal with the bloated and overly complicated build and deployment infrastructure. Also pay the platform owner for the ability to make it work / distribute it to someone.
2) Do it the easy way: pay some service so that you can make the app in a simplified/low-code way. Now your app is tied to someone else's service.
3) Do it the expensive way: pay someone else a lot of money to do 1) or 2) for you.
What's missing is the option 4): build it in some free, low-code tool, distribute it for free to whoever you want, without entering a relationship with any company whatsoever. That would be "the HyperCard for mobile".
On top of that, these days you're likely to want to have capabilities that are available only either through option 1), or through an API, requiring entering another relationship with some other third party.
Closest thing to 4) I've seen on Android is Tasker, with its ability to dump an .apk containing whatever UIs and if-this-then-that rulesets you created. But it's not exactly ergonomic, it's Android-only, and doesn't solve the distribution problem.
I don't see why it would take years to learn to write an app in Swift or Kotlin (or React Native or Flutter), even starting from zero. I'm not trying to be cute here.
There are probably 50 high-quality end-to-end tutorials for building a video streaming app. I guess using Firebase ties you to a service, but so what? If it's for 4 friends it doesn't have to be infinitely scalable. Just by writing an iOS app you're tied to Apple, right? And surely an app isn't a failure just because it has dependencies?
If I were a non-programmer, I wouldn't have time to learn the basics of Swift or Kotlin even if it took only a month, if all I wanted was to build a small app for friends. I already have to deal with Apple by virtue of owning an iPhone, but I would be reluctant to enter a relationship with yet another party just for the sake of my small app. On top of that, careful observation of the space tells you that an app is likely to outlive its service dependencies.
Dependencies you don't own are a liability. The less you have of them, the better.
Unfortunately there is an entire generation or two now of mainstream developers who never had the opportunity to experience things like HyperCard firsthand. It is always difficult to describe the full extent of the power of such systems without being immersed in one, and we don't really have contemporary equivalents to make the case.
12 years ago the author of this post would have had a little website and now he has the ability to record and instantly share video with his family in a pretty delightful app :)
So are you saying the barrier remains the same? Earlier we had less access to stuff, and could write simpler software. Now we have more access, but software has become complex
I think as computers have advanced in power the technology that runs on them increases in complexity to utilize the new power. With economies that prioritize growth, people in charge of things like web standards want new ways to innovate over their competitors. Developers don't want the power afforded by additional computing cycles to go to waste. They also find things lacking, like the difficulty of accomplishing "holy-grail" webpage layouts. Standards are expanded to accommodate these pain points, and new features are added, but nothing can be taken away as this would break things that already work. This continues to the point where the web implodes under a plethora of animated SVG hammers striking anvils and autoplaying videos.
Thinking of it another way, computers of a long time ago could do only basic things like printing strings or drawing lines. Computers of today can still do those things, in addition to a host of other things that are now only possible due to technological advancement, like streaming video. But in some environments it is possible to limit yourself to having the computer do just the simple things - it's just a matter of deliberately opting-in rather than doing the only thing that's possible to do. For example I'm trying to write a "Web 1.0" webpage in the style of early 2000's design like Praystation[1], ignoring the shiny new technologies that do a lot more, because I believe that "more" is not necessarily "better". I believe you have to be ideologically motivated to do this now, because it seems like a lot of average web users have come to expect SPA-style apps with fancy animations and client-side interactivity, and that's where a lot of interest in web design appears to lie at present.
Of course, for a video-sharing app this doesn't really apply because the platform it runs on was proliferated fairly recently. The oldest model in the smartphone lineage came out at the start of 2007, and in order to build apps for it, it was necessary to install a full-blown developer toolchain with visual layouting tools and hundreds of APIs available for use. That's the simplest it can get. In the 1980's you could just 20 PRINT "HELLO WORLD".
I think it would greatly help if the author published the source code, as he briefly mentions considering. At least then it would be possible to judge exactly how technically complex the software is, and people could learn from it also.
However, at the point where you need to rely on complex cloud infrastructure like AWS to build things like video sharing applications, it might turn off anyone not completely interested/invested in app building. Although, I do believe a completely invested/motivated 16-year-old could pull off something similar, given the amount of documentation and free libraries on the web. That's not discounting the difficulty of actually accomplishing such a thing - for a young newcomer it would be necessary to learn many disjoint concepts for such a thing as video streaming. But given a significant amount of effort and motivation, it's at least possible to do in the present age.
> I believe you have to be ideologically motivated to do this now, because it seems like a lot of average web users have come to expect SPA-style apps with fancy animations and client-side interactivity, and that's where a lot of interest in web design appears to lie at present.
A key point the author made in the article was that simplicity is appreciated. Here's an analogy: if Snapchat is the equivalent of an SPA-style site with fancy animations etc, then his app is a static webpage.
Just consider for a moment: the reason these analogies to old technology are even necessary is because what this one amateur did today would have definitely taken a team of engineers to accomplish 12 years ago.
You are absolutely correct, the standards for professional pop tech are always being raised to new levels, just like with every field, e.g. cooking. Nobody is saying opening a successful restaurant is getting any easier. However, amateur gourmet cooking for the family is demonstrably much easier today than ever before in history.
> Part of it is the increased complexity of app development. Part of it is the walled gardens of our suppliers.
And our current culture contributes to both of these issues. We got the kind of computing that fits that culture, which is one that emphasizes profit making over other types of activity, and one soaked through with short term thinking. We need well funded basic research (to create computing media systems in the spirit of HyperCard et al.) and companies willing to push malleable computing systems out to their customers. HyperCard was extremely popular in its day, and Apple let it die on the vine. It's because they didn't know what to do with it -- the culture had become about shrinkwrapped solutions, and it no longer made sense.
Definitely, as culture is in hefty ways influenced by (and itself influences -- dialectic!) political economy.
> To change this "culture", you have to change our economic model for how we allocate resources in the world.
Yes, but the severity of this change is open to interpretation. Part of the "culture" I was referring to was a business culture that emerged in the late 70s and called for viewing shareholder value as the sole purpose of corporations. This was not necessarily the received wisdom in the prior decades -- decades which gave us places like Bell Labs and Xerox PARC, and the decades in which, arguably, the biggest qualitative leaps in computing occurred.
It's worth noting that most of these leaps in computing come from the ARPA research culture and/or from Bell Labs, the latter of which was a government regulated monopoly and not the case study of a normal private company doing basic research. Ditto for PARC, which was more or less an extension of the ARPA group at a time when the government funding was threatened.
The lessons from this (for basic research in computing) are simple: fund people and not specific projects, while having a general overall vision; fund at the appropriate timescale of half to full decades; don't interfere.
America has public schools, but they mostly suck. I would take a profit driven private school over a public one any day. I would also take dealing with a corporation’s customer service over a government office like the DMV any day.
And while you mention healthcare, the actual problem is that they aren’t profit driven enough and instead exist as this Frankenstein’s monster of a public/private partnership. I currently live in a 3rd world country with no health insurance or government intervention in medicine and healthcare and the system works well(ish). Fixing a broken bone or getting a tooth pulled is only going to set you back $10. And while this is a lot for poor people, it’s still only between 2-10 days salary for an average person.
That being said, there are major trade-offs in quality. I wouldn’t recommend giving birth or getting a life saving treatment here, but for anything somewhat routine, the healthcare provides great value. Also, all meds are over the counter and are very cheap. One med in particular is 100x cheaper here than the out of pocket costs in America.
> in TestFlight it shall remain forever: a cozy, eternal beta.
When I read this I thought -- oh man, I hope Apple doesn't decide to somehow limit this feature in the future. I wouldn't be surprised if it's already against the terms of service somehow to deploy a 'beta' app and then just keep using it forever. (Besides, don't you have to pay to be in the developer program? That may be somewhat acceptable for some adults, but what about kids whose parents would prefer that they "study" instead of "wasting time playing computer games"?)
Anyway, this is one of the reasons I use Android. While becoming root isn't realistic on many devices, I think it allows you to do enough for kids to find their way into programming. Maybe the challenge of getting a root shell/flashing your own OS is enticing to some too. And maybe the tech used to protect devices from their users is interesting to some people too -- and since OEMs need to know how all this stuff works, it also happens to be documented pretty well. (I think this tech is still in its infancy unfortunately.)
What is the equivalent of, or alternative to, "TestFlight as eternal beta" on Android? How do you (or "people") get self-developed, "bespoke", "home-cooked meal" software onto an Android phone? Is it easier than on iOS? I'm not familiar with it; I'm a web developer, not a device developer.
On Android, you can just download an .apk and install it (you need to enable the "Install unknown apps" setting, called "Unknown sources" on older versions). No code signing, app store, or corporate approval necessary. This is the #1 reason I will never be getting an iPhone (which is a shame, since iPhones are beautiful devices in almost every other way).
I sort of feel that way: recent technological progress over the last 10-20 years has made life significantly more convenient for us as consumers, but not as producers. I’m not going to deny that we have better tools now, like GitHub and Visual Studio, but what’s the equivalent of two-day shipping?
There’s a few companies pushing the envelope, whether they know it or not. Notion for example allows people to build shareable webpages with their blocks; if their API ever gets released, you can imagine normal people using it as a CMS for blogs or websites. Another example is Levels.fyi using Google Sheets as a backend; Firebase is hard to use, but Word and Excel aren’t.
It’s difficult to say where this trend will go, but end users shouldn’t be underestimated. We could probably teach more people python by asking them to manipulate numbers on a spreadsheet than with a black terminal and plain text editor.
> There’s a few companies pushing the envelope, whether they know it or not.
Part of the problem is, they're pushing the Envelope-as-a-Service. Notion is all fun and games, until they change something you needed from under you, or get bought and thank you for the incredible journey. Similar for the others.
HyperCard being a desktop product was a feature, not a bug.
> More and more non-CS people are learning to program every day.
We (the computing people) make this much harder for them than it needs to be. Instead of giving them malleable computing systems like Oberon, or Hypercard, Smalltalk, or related, we give them glistening time sharing systems and tell them that "programming" is, really, entering text into a teletype emulator and watching it do things. People are not stupid, but insofar as computing has been dominated by industry and limited by open source, we have given them stupefying options. We have better examples from the past and should know better.
Lack of popularity of these systems often has reasons other than the technical aspects, but your comment sounded like the judgment of someone who dismisses a technology as second-rate just because it is not popular.
For every one person learning to program, ten people are "learning to program". I would say that we're going to have a huge problem where most applicants aren't qualified for a given job, but I don't have to - we're already there.
Still, BOTH numbers are trending up, so I guess at least that part is nice.
If we can make software development less of a specific job and more of a companion skill of most jobs, maybe people will start and develop systems for their fellow non-technical co-workers.
We're going the opposite direction though, for good reasons. Have you ever experienced the result of business processes dependent on "hobbyist" systems developed by in-house dabblers? It's not good, and always needs to be cleaned up later.
But it was actually a popular thing in the earlier days of computing, and still happens sometimes.
One of the most likely places for it to happen currently is crazily complex Excel "macros" -- basically software apps written in Excel. So, counter to my original point, maybe those aren't always disastrous (yet?): there are a surprising number of them around powering all sorts of businesses, and while their internals would seem horrifying to a software engineer, they are working...
This is just a result of people having finite time and resources. There are thousands of possible hobbies, and making home-cooked software from scratch is just one obscure one.
For instance, can you make a literal home-cooked meal, without acting as a passive mass-market consumer (no grocery stores or supermarkets)? After all, you're living in the richest part of the richest part of the richest time of the world! Can you raise and butcher your own pig, grind your own flour, grow your own vegetables in your backyard, distill your own spirits, and chop your own wood to build your own fire? Cooking has always been a much more popular hobby than anything technical, and likely always will be, and yet almost all its aspects have been outsourced. Why expect software to turn out any other way?
Yes, but... in my childhood in the 80s, simple carpentry (for example) and other "handyman" type skills were a very popular "hobby" among men (yeah, it was gendered) across the nation. It wasn't just about which "hobby" seemed "fun" (although it may have been enjoyable for people who did it); it was about control over your environment, being able to make the things you needed instead of having to pay someone else to, being able to customize them to your needs, etc.
Software in today's world is similar. But does not occupy a similar place as a prevalent "hobby" that can produce useful things for your daily life.
Now it's true that such skills, even "hands-on" ones, are much less prevalent in younger generations. What the reasons are, I'm not sure we have totally identified.
But I'm sure it's not just about considering them as "hobbies", if that means recreational activities people might do because they are "fun", like bird-watching or drawing, divorced from their utility. Carpentry/handyman skills were not so popular because people found them more "enjoyable" than other "hobbies".
This is absolutely a false dichotomy. A home-cooked meal is not one conventionally thought of as being from items hyperlocally sourced. At least, not in the last 70-100 years. We’ve had Sears Roebuck and the like for quite some time.
> A home-cooked meal is not one conventionally thought of as being from items hyperlocally sourced.
Exactly. To allow the category of "home-cooked meal" to exist at all, beyond professionals and a few dedicated hobbyists, we have to loosen the criteria. Similarly, "home-cooked apps" don't exist in the strict sense, but do exist with looser criteria, where we allow people to work with standardized, mass-market tools (e.g. drag-and-drop website and form builders). Expecting lots of people to start from raw source code is like expecting home barbecue to start from the pig.
> you have to pretend it is some kind of "beta" of a mass market commodified product instead
Or you could use enterprise distribution, which is the "correct" route for apps that don't go through the app store. But TestFlight is probably the simplest way to distribute an app to a handful of people and the only real problem with it is the need to distribute a new build every 90 days.
Since Apple really loves to lean in on the “anyone can learn to code” message around iOS (see: Swift playgrounds), it would be cool if they could figure out how to do a family-and-friends version of enterprise distribution. Some official way to let small groups of people build things for each other without having to list them in the App Store for anyone to find.
Imagine a world where the ability to make a personal app like this is actually as easy as making a home-cooked meal; a skill passed from parent to child in the natural course of growing up.
That's pretty much the way life was for me in 1982. Of course, you couldn't make an app 'like this' in the 8-bit days - no networking, no onboard camera, no touch screen - but my Dad & I sat literally at the dinner table and wrote home-cooked apps together. We learned to code together, but he taught me how to make the computer do useful things: as a language teacher he created quiz games that he could take in to use in his high school classroom. I seem to remember designing custom fonts to get the accents right on the French words (did I mention, no Unicode?)
1. Start with an “easy” cookbook. There’s a whole genre of cookbooks for people starting out in their own or newly single dads. One of my first was “Dad’s Own Cookbook”, which has decent recipes that aren’t a huge production. After that, get the “Joy of Cooking”, which has a huge number of recipes from many cuisines. Also, take a look at Julia Child’s “Mastering the Art of French Cooking”. You may not be interested in many of the recipes (e.g., aspic) or even French food, but I found it very helpful to see how a “master” recipe could be forked into many different dishes.
2. When you make something for the first time, follow these rules.
- Read the ENTIRE recipe carefully before you start.
- Follow it exactly. Don’t substitute anything or “add your own twist”. You gotta eat every day, so you’ll get a chance to do that soon.
- Prep everything in advance. Dice the onions and put them in a bowl. Open up the can of tomatoes. Peel the carrots, chop them, and put them in another bowl. Measure the spices out into a shot glass. Yes, you will have to do a few more dishes. Yes, you will have some dead time while the oil heats up or the onions soften. When you’ve more experience, you can start interleaving things but in the meantime, you’ll avoid burning things while you learn how much time and attention various things take.
- Clean as you go. Having a cramped, cluttered counter stresses me out and often leads to mistakes.
- Try new recipes out under low-stakes conditions, not when you’re hosting your boss and girlfriend’s parents.
3. Get a decent knife and learn how to use it. You don’t need a $500 Japanese Santoku, but get something decent and keep it sharp. There are standard ways to dice an onion (etc). Do them smoothly first and the speed will come. If you’ve got friends in food service, they might be willing to help. (Exception: tourné is fairly silly)
4. Have fun and don’t take this too seriously. If all else fails, just order pizza.
Making a worthwhile mobile app in a few hours is still mostly a dream.
When cooking, you make a fresh copy of a meal according to a recipe. A fresh copy of an app is of not much use; it's as if you needed to invent a new recipe and then cook it.
Cooking really tasty dishes takes experience; but the same holds for software engineering.
I think the hardest part of an app is distribution and sharing (I am not talking about being popular, but signing the app, uploading to a marketplace, etc). There could be an analogy to cooking for yourself in your kitchen (developing locally, the app works on your phone) versus cooking for a lot of people in an environment you don't know (a friend's kitchen, etc). You know that if you don't use your own utensils, your favourite pan, or your own oven (where you know exactly the times and temperatures), it won't be the same.
I've cooked lots of tasty, nutritious food in half an hour, and I rarely spend more than an hour cooking. I can only dream of a world where I could write an app in an hour that would be as enjoyable to use as the food I can cook in an hour is enjoyable to eat!
I really identify with his home-cook metaphor. I love cooking for folks and often get compliments that I could be a chef. Mostly because I’ve been cooking since I was a child. Ironically I’ve become a “software engineer” in a similar manner, 10 years of just trying things out and making things for friends and family. I do agree we’re getting to a space in software development that’s making it a bit easier to play and experiment with ideas. I think it would be amazing if folks could get a Blue Apron-like experience and make an app that would be useful.
I still think all these details could be solved for with a friendly IDE for beginners. In most cases folks just want to build a thing for themselves or a small group. Right now the tooling is still unfriendly to most but it feels like it’s moving towards something better
Here’s an example, I built a simple app to keep a schedule for watering my various plants that all have different needs. Originally it was just a google sheet but I saw an opportunity to play with ReactNative and expand my knowledge
A lot of everyday people have spreadsheets like this that could be friendlier. They’re not trying to ship them to thousands, just themselves and maybe a friend. Like sharing a meal, and then the recipe :)
Something about the personality of a programmer draws us towards that ever-fleeting siren of The Infinite. Infinite scalability, infinite reusability, infinite growth. It's this lust for the Bigness that our code could theoretically reach that causes most of us to pass right by the myriad of opportunities for small, more meaningful, more human software.
Maybe, if you read a lot of Paul Graham essays and are seduced by the siren call of startups and VCbux. But I'm more captivated by the infinite possibilities of the thing sitting right in front of me. And I bristle at machines thousands of times more powerful than my old Commodore actively preventing me from exploring those possibilities without identity management, app signing, hooking up an even bigger computer to my small computer, et weary cetera.
But what do I know, I'm just a lizard person. My small computer is really a phone, an inherently social tool. Its manufacturer didn't intend for me to easily hack it, therefore I've really no right to.
I relate to the sentiments of this article. Smallness is a commodity we programmers forget the value of too frequently.
After reading the article the other week about the person who had their entire life managed in a single text file, I felt inspired to start doing the same. I have a python script that crawls the file and then sorts it all by date, plus a little command-line interface for checking off items or aggregating things by person or topic, but not much more. A text file isn't a good way to store data for a big application, but I know the exact limits of what I want to do, so I'm not going to use a database or anything more advanced than a human-readable .txt file.
So far I've spent more time writing the script than time I've saved by using it, but what else is new for a personal project?
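That kind of script can be tiny. A minimal sketch, assuming one item per line with an ISO date prefix (the commenter's actual file format and features are their own):

```python
from datetime import date

def sort_tasks(lines):
    """Sort task lines by their leading ISO date (YYYY-MM-DD); undated lines sort last."""
    def key(line):
        try:
            # Parse the first whitespace-separated token as a date.
            return date.fromisoformat(line.split()[0])
        except (ValueError, IndexError):
            return date.max
    return sorted(lines, key=key)

tasks = [
    "2020-03-01 water the plants",
    "2020-02-14 call mum",
    "someday: learn Rust",
]
for task in sort_tasks(tasks):
    print(task)
```

The checkbox and per-person aggregation features would layer on the same parse-a-line approach.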
He's the author of "Mr. Penumbra's 24-hour Bookstore" and "Sourdough", a champion of small-press publishing and married to a woman who is an olive expert.
I like thinking about coding with this type of metaphor. Coding, imo, is more like a puzzle. I get the cooking analogy, but it's a bit too abstract for something as deterministic as most coding is. I do see it applying to the experimental side.
Relating to puzzle...
- Design: you "know" what you want your outcomes to look like
- Software: you "know" the type of pieces you're dealing with (there can be variants)
- Compile: pieces can only fit to exact tab/blank structure(per piece variant)
- Iterate: your entire design is systematic. In complex puzzles, your early decisions can greatly alter future state. How you progress and build in later states is important too.
- Can be good, eg like puzzle-solving strategies that are very specific in the beginning, but enable rapid/seamless building in the future.
- Can be bad, eg it may seem like you're mostly complete in the later states, however, there are holes and holes left ignored are a liability for the acceptable state. You might find that you've incorrectly placed pieces that seemed viable in the past, but leave you in conflicted situations in the future.
Coding is often seen as technical/deterministic because the makers are often technical, and so the medium is typically used in a fashion that affords scaling and regularity.
There is nothing inherently deterministic about coding as a medium of expression; the fact that the basis is binary doesn't decrease the power of expression we can wield vis-a-vis analog media.
Of course there are constraints of the tooling; but whether you are inherently building something that it makes sense to compare to a puzzle is entirely up to you as a creator.
> There is nothing inherently deterministic about coding as a medium of expression
I don't disagree with this and realize I failed to recognize this perspective... even myself, generally, I internalize & embrace expression through coding.
I metaphorize coding as a puzzle mostly when debugging or designing complex software.
I exchanged a couple of tweets with Robin about this. I think this app is really cool, and I didn't want to get nerdsniped, but it seems to me that this could be built using web APIs today. So you could have a cross-platform compatible version that doesn't require code signing, etc. That would be really nice.
Got me thinking how cool it would be if there was then an easy way for anyone to instantiate a copy for their own family. They own the data, they just pay for what little they use of S3 and lambda.
I've been wondering lately if there is any business opportunity in personal IT... More and more technology is entering our homes... We hire people to do plumbing and electrical installations, maybe we should also hire people to do the software...
It’s nowhere near the level of everyone hiring plumbers and electricians, but if you’re really interested, hunt down some home automation companies (or start your own). If you can find the right clients, you can easily make five figures a job setting up Home Assistant to run someone’s new house.
That's a great metaphor, I make apps just for myself or a few friends sometimes and this is exactly how I feel about it.
For instance I made an app called "YourSquare" that's just like FourSquare but 100% private and local to my phone. No real point to it, I could make Google Maps do what it does, but it's a home cooked meal.
I really appreciate the approach of building something simple (in user experience but also in implementation). These days it's so easy to get sucked in and end up spending months setting up Kubernetes clusters and the "that won't scale"-type thinking.
"Home-cooked" software is such a perfect analogy to indie game development.
It resonates with me much more than other terms like "full stack", "DevOps", and so on.
It nicely removes the guilt of not writing the perfect codebase, as we often have to sacrifice that in order to ship a game.
For me, indie games might not be Michelin-star earners, but they warm the stomach. With imperfect charm they offer delightful experiences, just like home-cooked patisserie.
My brother and I built an app together that is only used by the two of us. It’s a simple way to keep a shared ledger between the two of us because we split a lot of expenses as roommates. We’ve been using it almost every day for about 10 months now.
In a way it’s different from the author’s app because our app is over-engineered in some pointless but extremely fun ways. It’s also similar in that it will:
- Never break unexpectedly due to an unwanted app update
- Never spy on us and steal our personal information
- Look and behave exactly the way we want it to, and no other way
I highly recommend building an app in this way. For me it reminded me what I love about programming, and allows me to work on a project unhindered by meetings, project managers, or deadlines. Just like cooking something for yourself, it can be as perfect or as imperfect as you want it to be, as well as an exercise in self-expression.
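The core of such a shared ledger is small; a hypothetical sketch of the balance math (the names and the 50/50 split are illustrative assumptions, not the commenters' actual app):

```python
def net_balance(entries, me, other):
    """Each entry is (payer, amount) for an expense split 50/50.
    Returns how much `other` owes `me`; negative means I owe them."""
    balance = 0.0
    for payer, amount in entries:
        if payer == me:
            balance += amount / 2   # they owe me their half
        elif payer == other:
            balance -= amount / 2   # I owe them my half
    return balance

entries = [("alice", 40.0), ("bob", 10.0), ("alice", 6.0)]
print(net_balance(entries, "alice", "bob"))  # 18.0
```

Everything else (sync, history, UI) is where the fun over-engineering would live.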
Anyone interested in creating a "home cooked meal" for Android using scratch-like drag and drop components?
Check out https://appinventor.mit.edu/
It builds an .apk for you and all. It's primarily intended for teaching kids how to code, but seems like it's a good fit for simple apps like the one described by the OP.
A while back I came to realize that not every app I produce (I'm an Indie Mac/iOS dev) has to be polished and App Store-ready. I can write an app for an audience of one - me. Getting an app ready for an App Store easily adds 50% of development time.
A home-cooked app can have hard coded values (oh yeah!), doesn't have to be pretty (although it's more fun), doesn't have to support all platforms (no, I won't add Apple Watch support!), it just has to work for me.
My latest project: a little iOS app that I can use at a bookstore to scan the book's barcode and then checks if that book is available at my local public library.
When I showed that to my SO she was amazed and said: "And you just made THAT?". Well, yes. My secret superpower is that I can make these little computers that we all keep in our pockets these days make anything I want them to.
A few other projects that I made for myself over the years:
- App that generates PDF invoices from my App Store financial reports (I simply paste in the plain text from the website)
- A break timer app (old one was 32bit-only, I didn't bother to download a new one)
- Text snippet app: it scans an email for the sender's name and pre-populates possible answers. I turned this into a tiny CRM that keeps track of feature requests and to whom I sent an update. This is actually in the App Store, where it was a huge commercial failure, but I use it myself e v e r y day.
- Quick click: it's like a command line for my Mac. I type something I see on the screen (e.g. the title of a button), the app highlights possible matches, I press Return and the app sends a simulated mouse click. I can also use this as a window manager to resize the active app, because, why not?
For personal use:
- Portfolio simulator app that calculates every possible IRR for a given number of years (once I was done I discovered portfoliocharts.com, well...)
- Portfolio tracker app to track my retirement savings: this is like a tiny spreadsheet app, only it's fed by 3 csv files (orders.csv, transactions.csv, asset prices.csv). "Editing" is done in vim. The app generates various reports that I use to check if I'm on track with my savings.
- Simple stock checker app (fetches the prices of a, yes, hard-coded list of ticker symbols).
- App to categorize my spendings. Input is a csv file (I like plain text). I wanted to learn how Naive Bayes filtering works and have to say, seeing the app do its thing is as close to magic as I ever got.
- Time lapse screen recorder: creates a screenshot every x seconds. I used this once to record a video of how I'm developing an app.
- Custom implementation of Unshaky (with more stats and settings) to fix the double-space keypress issue of my new MacBook Air
- Alarm clock app that uses the accelerometer to wake me when I'm not in a deep sleep phase.
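The Naive Bayes spending categorizer in that list can indeed be surprisingly small. A sketch of the standard multinomial approach over words in transaction descriptions (category names and training data here are made up, and the commenter's actual implementation may differ):

```python
import math
from collections import Counter, defaultdict

class SpendingClassifier:
    """Tiny multinomial Naive Bayes with add-one smoothing."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.cat_counts = Counter()              # category -> number of examples
        self.vocab = set()

    def train(self, description, category):
        words = description.lower().split()
        self.word_counts[category].update(words)
        self.cat_counts[category] += 1
        self.vocab.update(words)

    def classify(self, description):
        words = description.lower().split()
        total = sum(self.cat_counts.values())
        best, best_score = None, -math.inf
        for cat in self.cat_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.cat_counts[cat] / total)
            denom = sum(self.word_counts[cat].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[cat][w] + 1) / denom)
            if score > best_score:
                best, best_score = cat, score
        return best

clf = SpendingClassifier()
clf.train("rewe supermarket berlin", "groceries")
clf.train("edeka market", "groceries")
clf.train("shell petrol station", "fuel")
print(clf.classify("aral petrol"))  # -> fuel
```

Watching even this toy version pick the right bucket for an unseen merchant is a good share of the "close to magic" feeling.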
This resonated with my own personal experience building a messaging app for my closest friends and family. It's been a steady 8-12 daily active users for the past few months.
I'll probably write a blog post in the future about what it does and why we use it. I'm also considering open sourcing the entire project!
I've been wanting an app like that for a while now. I started prototyping something like that around planning events with others. I'm excited to see more of this!!
> roughly half of that time was spent wrestling with different kinds of code-signing and identity provisioning and I don’t even know what.
It's worth noting that in the early era of personal computing, companies did not view users as idiots to be herded and restricted by their technology, and computer magazines contained source code listings.
It’s not necessarily about considering users idiots. As with anything, once a critical mass of users was reached, people started creating malicious software, and code signing is a way to help prevent that.
Of course you can run your own self-signed APK on your own phone, nobody is preventing you. But use the Apple App Store as a delivery medium? Come on. Code signing is a bare, completely responsible minimum.
It's also worth noting that in the early era of personal computing, viruses which caused considerable data loss were a commonplace occurrence.
These days, ransomware and other malicious software are very much an ongoing concern. I welcome efforts by OS providers to limit the damage these can cause; the minor annoyance it causes developers is more than made up for by the increase in security for the end user.
Well, the quoted sentence comes from the article. Of course with time it gets easier, but the first time you submit your app to App Store an awful lot of time is spent on the provisioning aspect.
And at a certain complexity level it’s not worth printing out reams of minute schematics of the thousands of components inside a modern flat screen that nobody will ever read.
Go get them via PDF on the internet, if available.
I think his point is that back in the day manufacturers were happy to provide the resources needed for users to repair their products (or delegate it to a third-party of their choosing).
This is not the case anymore. You are not going to find a manufacturer-approved source for schematics for most modern consumer products. The only way these schematics surface on the web is by being stolen and leaked by factory workers and this could take considerable time from the release of the product.
Manufacturers also try to corrupt the law to their advantage by lobbying against bills such as the right to repair with completely insane arguments (often involving safety, etc - the typical “think of the children” nonsense) which sadly work against clueless senators (or whatever the people at these hearings are called in the US). You can check out Louis Rossmann’s YouTube channel for recent examples of this.
The "if available" part is the problem. I'd be 100% fine with electronic delivery, if they were simply available.
Having to go to dodgy websites with my browser in paranoid mode just to find documentation to try to keep my gizmo out of the e-waste stream a little longer, is suboptimal.
Modern flat screens are actually far simpler than the TVs of yore; that actually makes them more difficult to repair. You will typically have a power supply (and if the backlight isn't LED, a high-voltage board as well) and a "T-Con" board, which usually has one honking big SoC on it that is both vendor- and TV-specific. There isn't much to repair aside from the power supply, and honestly you don't really need a schematic to repair that.
Have TVs become more complex? I recently peeked into mine, and it had only two major boards, one for power, the other for AV stuff. Not mine, but similar: https://i.imgur.com/1uHsfYI.jpg Hardly anything there, and very easy to replace a blown capacitor or a whole board.
They're more integrated. Each board and chip is doing a lot more than an old TV would have, and you can't really do anything to them other than just swap out the whole thing.
[1] https://thunkable.com/