Rethinking the Behavioral Value Reinvestment Cycle — a Model That Helped Google Succeed (and Fail)

Memri
Dec 1, 2020

22 years ago, in the midst of the dotcom boom, a visionary company was founded. It promised to make the new World Wide Web accessible to everyone, putting users’ interests above all else. However, after changing its revolutionary operating model to earn more revenue, it evolved into the Google we know today — an efficient, yet extremely privacy-unfriendly search engine that is always hungry for user data.

So how did we get here? What was the idea that helped Google succeed, and could it be used again for good? In this article, we dive into the topics explored in Shoshana Zuboff’s 2019 book “The Age of Surveillance Capitalism”, think about how Google’s original model could be fixed, and explain how we are trying to do just that at Memri.

Google’s altruistic past

Despite all the flak Google takes for its business practices, the company does provide an excellent service to billions of people. Its web indexing and search have been the best in the world for over 20 years. This is partly due to Google being the first corporation to master something that social psychologist Shoshana Zuboff describes as the “behavioral value reinvestment cycle”.

According to Zuboff, users produce “raw material” for Google in the form of behavioral data: the number and pattern of search terms, spelling, dwell times, and click patterns, among other things. Back in the day, she argues, Google collected that behavioral data and reinvested it “in the improvement of the product or service”, that is, to improve search results and introduce new products (spell check and translation, for instance). This is what Zuboff calls “the behavioral value reinvestment cycle”.

When the bubble burst

In other words, back in the late 1990s Google was driving a virtuous cycle: users received free search results, and the company reinvested their behavioral surplus into improving future searches. But once the dotcom bubble burst in 2000 and profits became a more immediate concern for startups, Google came up with a new use case for this behavioral data: predictive advertising.

At the time, other search engines were primarily selling ads to be shown with specific search terms. Google, in turn, knew it could predict which ads the user would likely click on based on their data, and started showing paid placements accordingly.

With this superior ad-serving algorithm, the Mountain View giant started delivering much higher click-through rates than other search platforms. In the process, Google developed a financially driven culture of pulling ever more behavioral surplus data from its users.
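To make that shift concrete, here is a toy Python sketch of the difference between ranking ads by keyword bid alone and ranking them by expected value (bid multiplied by a predicted click-through rate). The data structures, numbers, and function names are ours for illustration only; this is not Google’s actual ad-serving system.

```python
# Illustrative only: a toy expected-value ranking, not Google's actual ad system.
# Each candidate ad carries the advertiser's bid and a click-through rate (CTR)
# predicted from behavioral data.

from dataclasses import dataclass
from typing import List

@dataclass
class Ad:
    advertiser: str
    bid_per_click: float   # what the advertiser pays per click
    predicted_ctr: float   # estimated probability that this user clicks

def rank_by_bid(ads: List[Ad]) -> List[Ad]:
    """The older model: the highest bid on the keyword wins the slot."""
    return sorted(ads, key=lambda ad: ad.bid_per_click, reverse=True)

def rank_by_expected_value(ads: List[Ad]) -> List[Ad]:
    """The predictive model: expected revenue per impression = bid * predicted CTR."""
    return sorted(ads, key=lambda ad: ad.bid_per_click * ad.predicted_ctr, reverse=True)

ads = [
    Ad("high_bidder", bid_per_click=2.00, predicted_ctr=0.01),  # pays a lot, rarely clicked
    Ad("relevant_ad", bid_per_click=0.80, predicted_ctr=0.06),  # pays less, clicked often
]

print([ad.advertiser for ad in rank_by_bid(ads)])             # ['high_bidder', 'relevant_ad']
print([ad.advertiser for ad in rank_by_expected_value(ads)])  # ['relevant_ad', 'high_bidder']
```

In this toy example, the ad predicted to be more relevant wins the slot even with a lower bid, which is exactly why accurate behavioral predictions became so financially valuable.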

Today, the results of that incentive are more evident than ever. Google accumulates behavioral data at an unprecedented rate, drawing on far more than users’ web searches: it also collects content from emails, browsing history, online purchases, the Chrome browser, Maps, YouTube, Drive files, device location, and much more. All of this data continuously feeds into each individual’s unique User Profile Information (UPI). UPIs are essentially databases that let the corporation spam users with extremely accurate targeted ads, making the browsing experience feel almost Orwellian at times.

It’s time to fix this model

What do we as users get for surrendering every scrap of our behavioral data to Big Tech? More relevant search results, and ads for things we’re more likely to buy than randomly served products. While there is (some) value in that, the cost to our digital privacy is significantly higher.

Information anxiety, addictive web surfing, and buying things you don’t need all come as part of the package. It is in Big Tech’s core interest to “hook” you on its platforms by any means necessary, no matter the impact on your mental health and general wellbeing — and that impact can be truly ruinous.

According to research published in Preventive Medicine Reports, screen time is significantly associated with moderate or severe levels of depression in adults. Despite the value you get online, we think these costs are simply too high — and not necessary.

That’s why we created Memri, a system that empowers you by putting the value of your data into your own hands, while making sure all of that data stays completely private.

Memri is an open source project that enables you to take control of your data by pulling it from various apps and storing it in a safe, encrypted place. We seek to mimic the original Google formula, providing the user with a new, virtuous behavioral value reinvestment cycle.

Memri will only collect the data that users want us to collect, from sources including social media, texts, emails, files, and more. That data will be stored in an encrypted Personal Online Datastore, or POD, and no one but you (not even Memri) will have access to it. Over time, we will start integrating machine learning algorithms to provide you with safe, helpful insights for improving your life (like analyzing your sleep and exercise data to find optimal patterns).
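As a rough illustration of the idea behind the POD, here is a minimal Python sketch in which records are encrypted with a key that only the user holds before they are stored. The class and method names are hypothetical placeholders, not Memri’s actual POD API, and the in-memory dictionary stands in for real storage.

```python
# A minimal sketch of the idea behind an encrypted personal datastore:
# records are encrypted with a key that only the user holds, so whoever
# operates the storage sees nothing but ciphertext. Class and method names
# are hypothetical, not Memri's actual POD API.

import json
from cryptography.fernet import Fernet  # pip install cryptography

class PersonalDatastore:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records = {}  # stand-in for the real encrypted storage backend

    def put(self, record_id: str, item: dict) -> None:
        plaintext = json.dumps(item).encode("utf-8")
        self._records[record_id] = self._fernet.encrypt(plaintext)  # store ciphertext only

    def get(self, record_id: str) -> dict:
        ciphertext = self._records[record_id]
        return json.loads(self._fernet.decrypt(ciphertext))

# The key never leaves the user's side; without it the records are unreadable.
user_key = Fernet.generate_key()
pod = PersonalDatastore(user_key)
pod.put("email-001", {"source": "email", "subject": "Lunch?", "body": "12:30 works"})
print(pod.get("email-001")["subject"])  # Lunch?
```

The point of the design is that whoever operates the storage only ever sees ciphertext; without your key, the records are unreadable.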

We are excited to invite users to the upcoming Memri beta launch to access these capabilities. To learn more about all the cool things that you can achieve with Memri software, please visit our blog.
