I know this is innocent (maybe not), but who approved this at fb... maybe don't frame this as building AI that will capture kids' attention. Why does it even need to be framed like this... it could be stick figures or ANYTHING. It's comical.
I imagine they might just be detached enough from reality to believe that this would actually improve their image, doing something family-friendly or whatever.
You can rest assured that the true doom will come from a place you didn't plan for. The traditional personal-finance wisdom is to diversify so that no single event can adversely affect your financial future. I think the future will present more channels of diversification beyond traditional stocks and bonds, but the principles remain the same.
I'm impressed with OpenAI confronting this head on.
"Our model, despite being trained on a curated subset of the internet, still inherits its many unchecked biases and associations."
If these models find their way into production environments, and if they are good enough and profitable enough, they will eventually become legacy systems quietly perpetuating the biases of past times.
What's interesting about this to me is that ultimately we _want_ our AIs to learn bias. The whole point of a predictive AI is to model the behavior of the thing it's predicting. So an AI trained on human output, as CLIP is, must by necessity learn our prejudices. If it didn't, it wouldn't be good at predicting how we describe images.
The model learning bias isn't the issue. You could ask me what I think the racist members of my family might write about a given image. I'd then be able to emulate them inside my head and accurately predict their responses. We all do that. It's how we have moments like "I knew you were going to say that."; "That's typical of you to say."; "Why am I not surprised?"; etc. The fact that I, and everyone else, can do that does not imply that we are biased. It's how we behave that determines if we are biased.
We want our AIs to do the same.
The real ethics question here is not how we prevent AIs from learning bias. It's how we get AIs to not _express_ those biases. We need a way to put them into "impartial" mode, much like we take biased and fallible humans and make them judges in courtrooms.
Personally I don't think that's going to be as hard as some imagine. Again, remember that these AIs are learning to emulate humans, _including_ judges. Give GPT-* a bunch of court documents and transcripts and it will learn the capacity to emulate a judge. Then you just need to carefully craft its prompt text so that for any given query, you can be reasonably sure it's acting impartially.
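For illustration, here's a minimal sketch of what that "impartial mode" prompting could look like, assuming a GPT-3-style text-completion API. The persona wording, the template structure, and the function name are all my own hypothetical choices, not a tested recipe:

```python
# A hypothetical "impartial judge" prompt template. Everything here
# (the wording, the fields) is an illustrative assumption.
IMPARTIAL_PROMPT = """You are an impartial judge. Weigh only the evidence
presented below. Do not consider the race, gender, wealth, or background
of any party. If the evidence is insufficient to decide, say so.

Case summary:
{case_summary}

Impartial ruling:"""

def build_prompt(case_summary: str) -> str:
    # Fill the template; the result would be sent to a completion API.
    return IMPARTIAL_PROMPT.format(case_summary=case_summary)
```

Whether a preamble like this reliably keeps the model impartial is, of course, exactly the hard empirical question.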
I think that's the challenging part about bias. If you can make that discrimination (no pun intended) between learning a bias and expressing it, then it's a feature you can control.
The irony is that the bias against darker skin mentioned in this paper is probably due in part to inherent bias in the image-processing algorithms of the sensors themselves. Look up the "Shirley Card."
I am interested to know whether it's possible to correct biases after training, without resorting to retraining and training-data curation.
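One known post-hoc approach, roughly in the spirit of the hard-debiasing projection from Bolukbasi et al. (2016) for word embeddings, is to estimate a bias direction in embedding space and project it out at inference time, leaving the trained weights untouched. A minimal sketch, where the function names and the way the axis is estimated are my assumptions:

```python
import numpy as np

def project_out(embeddings: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove each embedding's component along a single bias axis.

    embeddings:     (n, d) array, e.g. CLIP image/text embeddings.
    bias_direction: (d,) vector, e.g. the difference between the mean
                    embeddings of two contrasting prompt sets.
    """
    b = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each row's projection onto the bias axis.
    return embeddings - np.outer(embeddings @ b, b)

# Hypothetical usage: estimate an axis from paired prompts, then
# neutralize it before computing similarity scores.
# bias_axis = embed("a photo of a male doctor") - embed("a photo of a female doctor")
# clean = project_out(candidate_embeddings, bias_axis)
```

This only removes whatever the chosen axis happens to capture; biases entangled across many directions survive it, which is presumably why retraining and data curation keep coming up.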
As for prompt engineering for GPT, it feels a bit like reading tea leaves. I'm not sure it's possible to know for certain that a specific prompt will elicit the desired behavior every time.
I wouldn't say it's only past times. You would be surprised how many people still hold those opinions (not only white-supremacist types). The internet just brings out the things people sometimes think but do not express in their everyday lives (unless they're under some kind of pressure or stress).
So for now these models can be treated as a very reductive snapshot of humanity (at least the English-speaking segment), warts and all.
If I were meeting a co-founder here, I'd probably try something like video-chatting with them every day for a week straight. Inevitably you'd end up talking about just about everything, and hopefully get a real feel for how each of you is as a person.
I have opinions on what the future looks like. I'd like to hear yours and see if there is overlap in our opinions on interesting customer segments and problem domains we could tackle together.
Professionally, I am in leadership on the business side of an AI/ML startup. It's going well, but I can't shake the feeling that I'm not actually solving any problem. I'm ready to take the dive on my own.
I would be interested in partnering with someone who considers themselves an engineer, regardless of what that looks like on the execution side.