I'm not sure it would happen in such an obvious way. It would probably happen in a way that "makes sense" for most of those people, and not from a "let's manipulate the votes" perspective.
I'll give an example. The establishment corporate media is already biased. That much has been clear this election (although it's been happening for many years or even decades).
The Internet is "pretty free" right now, and people can Google, or go on Twitter and see "news" that doesn't necessarily come from that establishment media. However, that is already starting to change.
Google, Facebook, and Twitter can all change that by giving "brand name" media much better ranking results through their algorithms. In some cases, such as with the recent "AMP" project, these companies won't even have to rely on good search ranking. They just benefit from having partnerships with Google and being shown at the top of the page automatically. What this means is that the corporate media's brainwashing continues almost unfettered going into the future.
The upside is that people can still look for other sources and views on the Internet, while they can't really do that on TV. So in that way the Internet is still better (as long as it doesn't start implementing "ISIS & others filters" under a new US president). But the portals through which most people get their information are building tighter and tighter relationships with the establishment corporate media, and voter education will suffer for it.
Ideally, what I'd like Google to do, unless it actually wants that kind of "bias" on its search pages (whether it comes from the results or otherwise), is use its increasingly advanced AI to figure out which election stories are "biased" and fact-free, and which stick closest to the facts and use the least loaded language.
"Journalism" is supposed to be about the facts, and Google search should promote those kinds of stories over the biased hit-jobs. And Google should also verify those facts itself with its AI, so that it doesn't just promote stories built on false "facts".
It's not going to be easy to do something like that well enough, but I think Google could do it, and I think AI has reached a point where that may be within its grasp within a few years.
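To make the idea above concrete, here's a toy sketch of the "rank less-biased stories higher" part. The loaded-word lexicon and the scoring are purely illustrative assumptions on my part, not any real Google system; an actual implementation would need a trained model and independent fact verification, not a word list.

```python
import re

# Hypothetical lexicon of emotionally loaded words (an assumption for the demo).
LOADED_WORDS = {"disaster", "outrageous", "corrupt", "shocking", "slams"}

def bias_score(story: str) -> float:
    """Fraction of the story's words that come from the loaded lexicon."""
    words = re.findall(r"[a-z']+", story.lower())
    if not words:
        return 0.0
    loaded = sum(1 for w in words if w in LOADED_WORDS)
    return loaded / len(words)

def rank_stories(stories):
    """Sort stories from least to most loaded language."""
    return sorted(stories, key=bias_score)

stories = [
    "Candidate slams outrageous, corrupt opponents in shocking speech",
    "Candidate outlines tax plan in speech to supporters",
]
ranked = rank_stories(stories)
print(ranked[0])  # the more neutrally worded story ranks first
```

Even this trivial version shows the hard part: a word list can flag tone, but it says nothing about whether the underlying claims are true, which is why the fact-checking half is the genuinely difficult bit.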
It's the old problem that is also evident in public (as in state-owned) tv broadcasting. Should the news programs and journalists take sides and endorse the viewpoint that they believe best reflects facts? Or should they just report the various viewpoints that are present in society regardless of their value?
In the first case, you run the risk of having a journalist using a public resource to advance his own point of view. In the second case you just end up being an echo chamber for what the politicians (who each represent a share of public opinion) want to say. Moreover, if you adopt the second approach, how much space do you give to each opinion, to avoid over or under-representing some? Proportional to the share of votes of the politician expressing it?
I'm not saying there is a solution. But the idea that the value of opinions can be established objectively (or even determined by an algorithm) is a bit naive.