On the Internet, the tidal wave of misleading content is relentless. However, 2017 seems to be the year that giant tech conglomerates are finally working to make a change when it comes to the spread of false information. Over the course of the past month, we’ve been monitoring “Project Owl”, the most official and far-reaching response to fake news that we’ve seen from Google so far.
What is Project Owl?
Project Owl is an overarching name for Google’s effort to combat fake news and eliminate problematic content from appearing in its search results. It can be broken down into three broad categories:
- Increasing user generated feedback on search results and tools
- Providing better standards for ranking and adjusting current ranking signals
- Increasing transparency
Google will meet these goals in various ways, but by publicly committing to changes across its tools and the company as a whole, it has shown that it’s willing to work toward a more accurate and meaningful future for Search. We’ll cover the changes that everyday users might notice as well as the ramifications for professionals in the search industry.
For Everyday Users
If you’re reading this post, you might be wondering what kinds of changes Project Owl will bring to Google Search. No, your search results will not suddenly be delivered by owl post. In fact, many of Project Owl’s changes won’t be overtly noticeable on the user level. The biggest change is that Google wants your opinion and your help more than ever. Through Project Owl updates, users are encouraged to share their reactions to results and to help the overall search process through direct feedback tools.
Autocomplete is a useful tool when you’re in a rush, but we’ve all had a misstep with it. Whether it’s sending a text with an inappropriate misspelling or having Google’s autocomplete surface an embarrassing search while you’re sharing your screen with coworkers, it’s clear that the autocomplete function is useful but flawed. This is primarily because Google’s Autocomplete is generated algorithmically and reflects what people are searching for across the web. While autocomplete can lead to some funny results, at times it generates inaccurate or offensive suggestions that can confuse or upset users. Google has finally recognized this as a problem and introduced a new feedback tool. It works like this:
When you type your query into Google on your phone or laptop, auto suggestions appear just like before, but in the bottom right-hand corner there is a new feedback link that reads “Report inappropriate predictions.”
Clicking on this link opens a pop-up box where you can provide feedback on every answer that Google provided. You can choose as many “predictions” as you want, and label them as hateful, explicit, violent, or inappropriate for another reason. Your feedback won’t be implemented immediately, but Google will take this feedback into account for future crawls.
This detailed feedback feature shows Google’s new commitment to providing a satisfying blend of accurate information and user experience, instead of focusing on just one or the other.
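Google hasn’t published how Autocomplete actually works, but the basic idea described above, suggestions ranked by how often people search for a prefix’s completions, can be sketched in a few lines of Python. The query log, function name, and ranking rule below are all illustrative assumptions, not Google’s implementation:

```python
from collections import Counter

# Hypothetical query log; a real system aggregates billions of searches
# and applies many more signals (freshness, locale, policy filters).
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web development", "weather radar", "weather today",
]

def suggest(prefix, log, k=3):
    """Return the k most frequent logged queries that start with prefix."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

print(suggest("wea", query_log))  # prints ['weather today', 'weather tomorrow', 'weather radar']
```

Frequency-driven ranking like this is exactly why offensive predictions can surface in the first place: if enough people search for something distasteful, a naive ranker will happily suggest it, which is the gap the new reporting tool is meant to close.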
Like Autocomplete, Google’s Featured Snippets are generated by an algorithm. This means that distasteful information can show up in the featured snippet for certain searches. The purpose of the featured snippet is to provide users with a quick, accurate answer at the top of the search results, without driving them to an individual website. When the information in this answer box is wrong, it can confuse students or complicate current event topics, and when the information is offensive, it can upset users or cause arguments. The new feedback feature implemented under Project Owl is intended to prevent that from happening.
Interestingly, we discussed providing feedback on Featured Snippets in our last post on Fake News; however, in the time since that post went up in March, Google has updated its feedback box. Project Owl’s feedback box is not only more detailed, it also asks users to flag offensive or hateful content as well as inaccurate information. It works like this:
If your search generates a featured snippet, you can look to the bottom right-hand corner of the box to find a link that says “Feedback”. Clicking this link will open a pop-up that asks for your feedback on the information provided by the featured snippet.
The old feedback pop-up had only four options instead of six: “This is helpful”, “Something is missing”, “Something is wrong”, and “This isn’t useful”. Notice that the old options addressed only inaccurate or unhelpful information, while the new feedback options address racist, dangerous, or violent content as well as misleading information. The updated feedback options show Project Owl’s goal of not only rejecting fake news, but also of making sure hateful or offensive content isn’t presented as the “best answer” to searchers.
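Google hasn’t said how these user reports feed back into Search, beyond noting that feedback informs future updates rather than taking effect immediately. Purely as an illustration of the idea, one might imagine reports being tallied per result and category, with a result flagged for review once any category crosses a threshold. Every name, category, and number below is a hypothetical assumption, not Google’s process:

```python
from collections import defaultdict

# Hypothetical report categories, mirroring the feedback options described above.
CATEGORIES = {"hateful", "explicit", "violent", "inaccurate", "other"}

reports = defaultdict(int)  # (result, category) -> report count

def report(result, category):
    """Record one user report against a search result."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    reports[(result, category)] += 1

def flagged(result, threshold=3):
    """True once any single category's reports for this result reach the threshold."""
    return any(reports[(result, c)] >= threshold for c in CATEGORIES)

for _ in range(3):
    report("dubious snippet", "inaccurate")
print(flagged("dubious snippet"))  # prints True
```

A threshold like this would also explain why an individual report “won’t be implemented immediately”: a single click only increments a tally, and nothing changes until enough independent reports agree.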
Aside from limiting the spread of misinformation, another arm of Project Owl’s endeavor is to increase transparency. Asking for feedback on its tools lets Google improve its features in a way that benefits the majority of its everyday users and shows those users that their opinions matter. Google is also improving transparency by publishing all new policies to its Help Center and providing more information to the public about the technology behind Search. We encourage all users to check out the “About this result” link that appears under the featured snippet box, or the “Learn more” button that shows up for various Google tools, to gain more insight into how Google works and how you can use search engines more effectively.
Under Project Owl, Google has been making changes to adjust ranking signals and provide better standards for knowing which low-quality webpages to flag. On March 14th, 2017, Google released an updated version of its Search Quality Rater Guidelines to “provide more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories”. These new Search Quality Rater Guidelines don’t determine individual page rankings, but they do help Google identify areas that need improvement, and down the road this information helps inform the algorithm updates that work to demote low-quality content.
In fact, Google revealed that it updated its ranking signals in Q1 2017 with the goal of reducing low-quality content. Although Google stated that a commitment to transparency is part of its Project Owl endeavor, this refers to its goal of being more open about its products with consumers, not to handing the secrets of its algorithm to eager SEOs. The company has been vague about the exact algorithm changes it made, which is not unusual, but it has been clear that updates will continue under Project Owl in order to limit the spread of misinformation and harmful content. Google’s VP of Engineering Ben Gomes recently stated, “We’ve adjusted our signals to help surface more authoritative pages and demote low-quality content”, but left it at that. Regardless of Google’s algorithm secrecy, it’s important to note that Google is taking problematic content seriously enough to update its core algorithm. This is the update the SEO community has dubbed “Fred”, and you can read more about it in detail here.
What’s Next for Google’s Project Owl?
Project Owl is not actually a project. By this, we mean that it isn’t a task with a start date and an end date; it’s not a project that will one day be 100% complete. Project Owl is an ongoing commitment by Google to eliminate misinformation and continuously improve its search engine and results. Google itself stated, “While our search results will never be perfect, we’re as committed as always to preserving your trust and to ensuring our products continue to be useful for everyone”. As long as there’s fake news on the Internet, Project Owl will be there to hunt it down.