Google.gov

by Adam J. White, The New Atlantis:

Amid growing calls to break up Google, are we missing a quiet alignment between “smart” government and the universal information engine?

Google exists to answer our small questions. But how will we answer larger questions about Google itself? Is it a monopoly? Does it exert too much power over our lives? Should the government regulate it as a public utility — or even break it up?

In recent months, public concerns about Google have become more pronounced. This February, the New York Times Magazine published “The Case Against Google,” a blistering account of how “the search giant is squelching competition before it begins.” The Wall Street Journal published a similar article in January on the “antitrust case” against Google, along with Facebook and Amazon, whose market shares it compared to Standard Oil and AT&T at their peaks. Here and elsewhere, a wide array of reporters and commentators have reflected on Google’s immense power — not only over its competitors, but over each of us and the information we access — and suggested that the traditional antitrust remedies of regulation or breakup may be necessary to rein Google in.

Dreams of war between Google and government, however, obscure a much different relationship that may emerge between them — particularly between Google and progressive government. For eight years, Google and the Obama administration forged a uniquely close relationship. Their special bond is best ascribed not to the revolving door, although hundreds of people have switched jobs from Google to the Obama administration or vice versa; nor to crony capitalism, although hundreds of meetings were held between the two; nor to lobbying prowess, although Google is one of the top corporate lobbyists.

Rather, the ultimate source of the special bond between Google and the Obama White House — and modern progressive government more broadly — has been their common ethos. Both view society’s challenges today as social-engineering problems, whose resolutions depend mainly on facts and objective reasoning. Both view information as being at once ruthlessly value-free and yet, when properly grasped, a powerful force for ideological and social reform. And so both aspire to reshape Americans’ informational context, ensuring that we make choices based only upon what they consider the right kinds of facts — while denying that there would be any values or politics embedded in the effort.

Addressing an M.I.T. sports-analytics conference in February, former President Obama said that Google, Facebook, and prominent Internet services are “not just an invisible platform, but they are shaping our culture in powerful ways.” Focusing specifically on recent outcries over “fake news,” he warned that if Google and other platforms enable every American to personalize his or her own news sources, it is “very difficult to figure out how democracy works over the long term.” But instead of treating these tech companies as public threats to be regulated or broken up, Obama offered a much more conciliatory resolution, calling for them to be treated as public goods:

I do think that the large platforms — Google and Facebook being the most obvious, but Twitter and others as well that are part of that ecosystem — have to have a conversation about their business model that recognizes they are a public good as well as a commercial enterprise.

This approach, if Google were to accept it, could be immensely consequential. As we will see, during the Obama years, Google became aligned with progressive politics on a number of issues — net neutrality, intellectual property, payday loans, and others. If Google were to think of itself as a genuine public good in a manner calling upon it to give users not only the results they want but the results Google thinks they need — the results that informed consumers and democratic citizens ought to have — then it would become an indispensable adjunct to progressive government. The future might not be U.S. v. Google but Google.gov.

“To Organize the World’s Information”

Before thinking about why Google might begin to embrace a role of actively shaping the informational landscape, we must treat seriously Google’s stated ethos to the contrary, which presents the company’s services as merely helping people find the information they’re looking for using objective tools and metrics. From the start, Google had the highest aspirations for its search engine: “A perfect search engine will process and understand all the information in the world,” co-founder Sergey Brin announced in a 1999 press release. “Google’s mission is to organize the world’s information, making it universally accessible and useful.”

Google’s beginning is a story of two idealistic programmers, Brin and Larry Page, trying to impose order on a chaotic young World Wide Web, not through an imposed hierarchy but lists of search results ranked algorithmically by their relevance. In 1995, five years after an English computer scientist created the first web site, Page arrived at Stanford, entering the computer science department’s graduate program and needing a dissertation topic. Focusing on the nascent Web, and inspired by modern academia’s obsession with scholars’ citations to other scholars’ papers, Page devised BackRub, a search engine that rated the relevance of a web page based on how often other pages link back to it.

Because a web page does not itself identify the sites that link back to it, BackRub required a database of the Web’s links. It also required an algorithm to rank the relevance of a given page on the basis of all the links to it — to quantify the intuition that “important pages tend to link to important pages,” as Page’s collaborator Brin put it. Page and Brin called their ranking algorithm PageRank. The name PageRank “was a sly vanity,” Steven Levy later observed in his 2011 book In the Plex — “many people assumed the name referred to web pages, not a surname.”
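The ranking idea described above can be sketched in a few lines of code. What follows is a minimal, illustrative version of the PageRank intuition — a page's score is divided among the pages it links to, so that links from important pages count for more. The damping factor of 0.85 comes from the standard published formulation; the toy link graph, function name, and iteration count are invented here for illustration and are not drawn from the article.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Illustrative PageRank: `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # every page gets a small baseline share, independent of links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a page with no outlinks spreads its score evenly
                share = damping * rank[page] / n
                for p in pages:
                    new_rank[p] += share
            else:  # otherwise its score is split among the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of four pages: nearly everyone links to A, so A ranks highest.
web = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
scores = pagerank(web)
```

In this toy graph, page A accumulates the highest score because three of the four pages link to it — a small-scale version of the "important pages tend to link to important pages" intuition the text describes.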

Page and Brin quickly realized that their project’s real value was in ranking not web pages but results for searches of those pages. They had developed a search engine that was far superior to AltaVista, Excite, Infoseek, and all the other now-forgotten rivals that preceded it, which could search for words on pages but did not have effective ways of determining the inherent importance of a page. Coupled with PageRank, BackRub — which would soon be renamed Google — was immensely useful at helping people find what they wanted. When combined with other signals of web page quality, PageRank generated “mind-blowing results,” writes Levy.

Wary of the fate of Nikola Tesla — who created world-changing innovations but failed to capitalize on them — Page and Brin incorporated Google in September 1998, and quickly attracted investors. Instead of adopting the once-ubiquitous “banner ad” model, Google created AdWords, which places relevant advertisements next to search results, and AdSense, which supplies ads to other web sites with precisely calibrated content. Google would find its fortune in these techniques — which were major innovations in their own right — with $1.4 billion in ad revenue in 2003, ballooning to $95 billion last year. Google — recently reorganized under a new parent company, Alphabet — has continued to develop or acquire a vast array of products focused on its original mission of organizing information, including Gmail, Google Books, Google Maps, Chrome, the Android operating system, YouTube, and Nest.

In Google We Trust

Page and Brin’s original bet on search has proved world-changing. At the outset, in 1999, Google was serving roughly a billion searches per year. Today, the figure runs to several billion per day. But even more stark than the absolute number of searches is Google’s market share: According to the January Wall Street Journal article calling for antitrust action against Google, the company now conducts 89 percent of all Internet searches, a figure that rivals Standard Oil’s market share in the early 1900s and AT&T’s in the early 1980s.

But Google’s success ironically brought about challenges to its credibility, as companies eager to improve their ranking in search results went to great lengths to game the system. Because Google relied on “objective” metrics, to some extent they could be reverse-engineered by web developers keen to optimize their sites to increase their ranking. “The more Google revealed about its ranking algorithms, the easier it was to manipulate them,” writes Frank Pasquale in The Black Box Society (2015). “Thus began the endless cat-and-mouse game of ‘search engine optimization,’ and with it the rush to methodological secrecy that makes search the black box business that it is.”

While the original PageRank framework was explained in Google’s patent application, Google soon needed to protect the workings of its algorithms “with utmost confidentiality” to prevent deterioration of the quality of its search results, writes Steven Levy.

But Google’s approach had its cost. As the company gained a dominant market share in search … critics would be increasingly uncomfortable with the idea that they had to take Google’s word that it wasn’t manipulating its algorithm for business or competitive purposes. To defend itself, Google would characteristically invoke logic: any variance from the best possible results for its searchers would make the product less useful and drive people away, it argued. But it withheld the data that would prove that it was playing fair. Google was ultimately betting on maintaining the public trust. If you didn’t trust Google, how could you trust the world it presented in its results?

Google’s neutrality was critical to its success. But that neutrality had to be accepted on trust. And today — even as Google continues to reiterate its original mission “to organize the world’s information, making it universally accessible and useful” — that trust is steadily eroding.

Google has often stressed that its search results are superior precisely because they are based upon neutral algorithms, not human judgment. As Ken Auletta recounts in his 2009 book Googled, Brin and then-CEO Eric Schmidt “explained that Google was a digital Switzerland, a ‘neutral’ search engine that favored no content company and no advertisers.” Or, as Page and Brin wrote in the 2004 Founders Letter that accompanied their initial public offering,

Google users trust our systems to help them with important decisions: medical, financial, and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.

But Google’s own standard of neutrality in presenting the world’s information is only part of the story, and there is reason not to take it at face value. The standard of neutrality is itself not value-neutral but a moral standard of its own, suggesting a deeper ethos and aspiration about information. Google has always understood its ultimate project not as one of rote descriptive recall but of informativeness in the fullest sense. Google, that is, has long aspired not merely to provide people the information they ask for but to guide them toward informed choices about what information they’re seeking.

Put more simply, Google aims to give people not just the information they do want but the information Google thinks they should want. As we will see, the potential political ramifications of this aspiration are broad and profound.

“Don’t Be Evil,” and Other Objective Aims

“Google is not a conventional company. We do not intend to become one.” So opened that novel Founders Letter accompanying Google’s 2004 IPO. It was hardly the beginning of Page and Brin’s efforts to brand theirs as a company apart.

In July 2001, after Eric Schmidt became chairman of the board and the month before he would become CEO, Page and Brin had gathered a small group of early employees to identify Google’s core values, so that they could be protected through the looming expansion and inevitable bureaucratization. As John Battelle describes it in his 2005 book The Search:

The meeting soon became cluttered with the kind of easy and safe corporate clichés that everyone can support, but that carry little impact: Treat Everyone with Respect, for example…. That’s when Paul Buchheit, another engineer in the group, blurted out what would become the most important three words in Google’s corporate history…. “All of these things can be covered by just saying, Don’t Be Evil.”

Those three words “became a cultural rallying call at Google, initially for how Googlers should treat each other, but quickly for how Google should behave in the world as well.” The motto exerted a genuine gravitational pull on the company’s deliberations, as Steven Levy recounts: “An idea would come up in a meeting with a whiff of anticompetitiveness to it, and someone would remark that it sounded … evil. End of idea.”

To Googlers, Levy notes, the motto “was a shortcut to remind everyone that Google was better than other companies.” This also seems to have been the upshot to Google’s rivals, to whom the motto smacked of arrogance. “Well, of course, you shouldn’t be evil,” Amazon founder Jeff Bezos told Battelle. “But then again, you shouldn’t have to brag about it either.”

Google’s founders themselves have been less than unified about the motto over the years. Page was at least equivocally positive in an interview with Battelle, arguing that “Don’t Be Evil” is “much better than Be Good or something.” But Brin (with Page alongside him) told attendees of the 2007 Global Philanthropy Forum that the better choice indeed would have been “Be Good,” precisely because “ultimately we’re in a position where we do have a lot of resources and unique opportunities. So you should ‘not be evil’ and also take advantage of the opportunity you have to do good.” Eric Schmidt, true to form as the most practical of Google’s governing troika, gives the slogan a pragmatic interpretation in his 2014 book How Google Works:

The famous Google mantra of “Don’t be evil” is not entirely what it seems. Yes, it genuinely expresses a company value and aspiration that is deeply felt by employees. But “Don’t be evil” is mainly another way to empower employees…. Googlers do regularly check their moral compass when making decisions.

As Schmidt implies, “Don’t Be Evil” has never exactly been self-explanatory — or objective. In a 2003 Wired profile titled “Google vs. Evil,” Schmidt elaborated on the motto’s gnomic moral code: “Evil,” he said, “is what Sergey [Brin] says is evil.” Even at that early stage in the company’s life, Brin recognized that the slogan was more portentous for Google itself than for other companies. Google, as gateway to the World Wide Web, was effectively establishing the infrastructure and governing framework of the Internet, granting the company unique power to benefit or harm the public interest. As the author of the Wired article explained, “Governments, religious bodies, businesses, and individuals are all bearing down on the company, forcing Brin to make decisions that have an effect on the entire Internet. ‘Things that would normally be side issues for another company carry the weight of responsibility for us,’ Brin says.”

“Don’t Be Evil” is a catchy slogan. But Google’s self-conception as definer and defender of the public interest is more revealing and weighty. The public focus on the slogan has distracted from the more fundamental values embodied in Google’s mission statement: “to organize the world’s information, making it universally accessible and useful.” On its face, Google’s mission — a clear, practical goal that everyone, it seems, can find laudable — sounds value-neutral, just as its organization of information purportedly is. But one has to ask: Useful for what? And according to whom?

What a Googler Wants

There has always been more to Google’s mission than merely helping people find the information they ask for. In the 2013 update of the Founders Letter, Page described the “search engine of my dreams,” which “provides information without you even having to ask, so no more digging around in your inbox to find the tracking number for a much-needed delivery; it’s already there on your screen.” Or, as Page and Brin describe in the 2005 Founders Letter,

Our search team also works very hard on relevancy — getting you exactly what you want, even when you aren’t sure what you need. For example, when Google believes you really want images, it returns them, even if you didn’t ask (try a query on sunsets).

Page acknowledged in the 2013 letter that “in many ways, we’re a million miles away” from that perfect search engine — “one that gets you just the right information at the exact moment you need it with almost no effort.” In the 2007 Founders Letter, they explain: “To do a perfect job, you would need to understand all the world’s information, and the precise meaning of every query.”
