
5 Q’s for Ben Mones, CEO of Fama.io

by Eline Chivot

The Center for Data Innovation spoke with Ben Mones, chief executive officer and co-founder of Fama.io, a U.S. firm that uses machine learning and natural language processing to screen the digital presence of potential hires and current employees for indicators of risk, culture fit, and performance. Mones discussed how Fama's software uses publicly available online information to help recruiters detect and address problematic workplace behaviors such as harassment and bullying.

Eline Chivot: Which problems in hiring and management led you to create Fama? Are organizations increasingly using solutions like yours, and why?

Ben Mones: Fama is a talent screening software that helps businesses identify problematic behavior in new hires by analyzing publicly available online information. Rather than provide a score or an assessment of a candidate, Fama helps identify online behaviors such as intolerance, threats, and harassment.

Companies care about identifying these sorts of behaviors in the hiring process because many have experienced firsthand that these "toxic behaviors" reduce workplace productivity. Team performance drops by 40 percent when workers are distracted by bullying or toxic behavior, and individual performance drops by 60 percent in toxic work environments.

Companies also recognize the financial importance of their reputations, which can easily be marred by an executive exhibiting racist or intolerant behavior. Business leaders know that companies with strong corporate reputations see 2.5 times better stock performance than the overall market.

Toxic behaviors can be very difficult to identify in the candidate screening process without the proper tools. These are the problems in hiring and management that we are solving.

These are the main reasons we see companies turning to social media and web screening in today's market. Regarding adoption over time, we've seen a rapid expansion in usage over the past five years, with the largest periods of growth coming since 2018. As of June 2020, Fama has more than 600 clients across 18 countries, exponential increases compared to 2016.

Chivot: Which different types of sources does Fama leverage to collect data and inform organizations? What are some of the behaviors that can be detected?

Mones: Fama offers a range of products that cover a diversity of data sources. Clients tend to increase the scope of data sources included in each search based on the seniority of the role that is being filled. For example, screening for entry-level employment may only include a review of publicly available social media and web content. More senior positions might include in-depth analysis of litigation history, sanction information, or even college newspapers. 

As mentioned, Fama does not make any recommendations or assessments about individuals. Clients ultimately define which behaviors they want to screen for within the software application, creating a highly configurable environment that reflects each company's screening requirements or, more broadly, its cultural criteria. No two Fama set-ups are the same; hiring managers and legal departments ultimately determine what is "toxic" for their organization.
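
To make the idea concrete, here is a minimal sketch of what such a client-defined screening configuration might look like, with scope widening by role seniority as Mones describes above. The structure, source names, and behavior categories are hypothetical illustrations for this article, not Fama's actual schema.

```python
# Hypothetical sketch of a client-defined screening configuration.
# Field names, data sources, and behavior categories are illustrative
# only and do not reflect Fama's actual product schema.
from dataclasses import dataclass, field

@dataclass
class ScreeningConfig:
    """Per-client screening scope and behavior criteria."""
    role_level: str                                   # e.g., "entry", "executive"
    data_sources: list[str] = field(default_factory=list)
    flagged_behaviors: list[str] = field(default_factory=list)

# Entry-level roles: public social media and web content only.
ENTRY = ScreeningConfig(
    role_level="entry",
    data_sources=["public_social_media", "public_web"],
    flagged_behaviors=["intolerance", "threats", "harassment"],
)

# Senior roles: deeper sources, as described in the interview.
EXECUTIVE = ScreeningConfig(
    role_level="executive",
    data_sources=["public_social_media", "public_web",
                  "litigation_history", "sanctions", "college_newspapers"],
    flagged_behaviors=["intolerance", "threats", "harassment", "violence"],
)
```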

Our AI is trained to identify problematic behaviors online such as intolerance, harassment, threats, violence, and illegal drug use. The software solution intelligently escalates these behaviors in a web-based dashboard for user review, presenting the behavior exactly as it appears online. 
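
As an illustration of the general technique, the sketch below trains a toy text classifier to label public posts with behavior categories and escalate only flagged items for human review, preserving the original text as the dashboard described above does. The model choice, labels, and training examples are assumptions for demonstration; Fama's actual system is not public.

```python
# Toy sketch of flagging behavior categories in public text.
# The model, labels, and tiny training set are illustrative assumptions;
# a production system would use large labeled corpora and human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (label = behavior category or "none").
texts = [
    "I will hurt you if you show up",                 # threat
    "people like them don't belong here",             # intolerance
    "had a great weekend hiking with friends",        # none
    "keep messaging her even though she said stop",   # harassment
]
labels = ["threat", "intolerance", "none", "harassment"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Escalate only flagged posts, presenting the text exactly as it appears.
for post in ["you better watch your back",
             "great weekend hiking with the family"]:
    category = clf.predict([post])[0]
    if category != "none":
        print(f"FLAG [{category}]: {post!r}")
```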

The most sought-after insights are intolerance, threats, and harassment. These behaviors appear to be universally accepted as critical to a company's screening process.

Chivot: One of Fama’s products is a predictive solution for recruiters. How does it work and how does it benefit organizations?

Mones: Fama no longer offers predictive solutions for recruiters; this was a solution that we tested internally but never made commercially available. There seems to be general interest in using predictive scoring to determine candidate fitness, i.e., assigning a number or ranking to an individual. Ultimately, however, we could not get comfortable with the potential for disparate impact on certain protected classes of individuals, or with the fact that you can't explain a machine learning-based score to a candidate. There's no way for a candidate to contest or explain their score if there is no rules-based formula for how a third party arrived at that number. This creates a series of privacy, legal, and general use challenges.

Chivot: Some job candidates and employees might find the idea of having their social media posts analyzed somewhat unsettling, and have concerns about algorithmic bias. How do you address this? 

Mones: We acknowledge the fears that individuals might have regarding social media and web screening, especially after widespread and ongoing abuses of personal data. That's why we take a privacy-first approach at Fama, built on three principles.

First, any report run through our systems requires the individual's informed consent before it is initiated. Further, if the report is used in pre-employment screening or any other context governed by the FCRA (the U.S. Fair Credit Reporting Act) or the GDPR (the EU General Data Protection Regulation), any decisions based on that report must be discussed with the consumer, and the consumer must have an opportunity to contest or explain the results.

Second, we only review publicly available information about a subject. We will never use surreptitious methods to uncover private information that was not meant to be shared publicly.

Third, we backtest our algorithms on a regular basis and use human auditing of the outputs to ensure that we are consistent in our labeling of online content and that we are not creating a disparate impact on any group of individuals. We frequently make changes to these algorithms based on our findings.
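
One standard way to audit for disparate impact is the "four-fifths rule" from U.S. employment-selection guidelines: flag for review any group whose pass rate falls below 80 percent of the highest group's rate. The sketch below applies that rule to screening outcomes; it is an assumed illustration of such a backtest, not Fama's actual audit procedure.

```python
# Hypothetical disparate-impact backtest using the four-fifths rule:
# if any group's "pass" (not-flagged) rate is below 80% of the highest
# group's rate, the labeling pipeline warrants review. Illustration only,
# not Fama's actual audit procedure.
from collections import defaultdict

def four_fifths_check(records, threshold=0.8):
    """records: iterable of (group, was_flagged) pairs."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)

    # Pass (not-flagged) rate per group, compared to the best group.
    rates = {g: 1 - flagged[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

# Example: group B's pass rate (70%) falls below 80% of group A's (90%).
sample = ([("A", False)] * 90 + [("A", True)] * 10
          + [("B", False)] * 70 + [("B", True)] * 30)
for group, (rate, ok) in four_fifths_check(sample).items():
    print(f"group {group}: pass rate {rate:.0%} -> {'OK' if ok else 'REVIEW'}")
```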

My advice to job candidates who may worry about the use of such tools is simple. The majority of people are not acting in an intolerant way online and have nothing to worry about. We don't flag behaviors such as alcohol use or profanity; a picture of a beer will not prevent you from getting a job. For the few who are driving narratives of hate, intolerance, and violence online, we will enable our clients to hold them accountable for their actions.

Chivot: Why is AI useful in the field of human resources more broadly? How will technologies like AI continue changing hiring practices?

Mones: AI is useful in the field of human resources because the practice in general suffers from significant amounts of menial and repetitive work. For example, consider data collection in candidate sourcing, filtering resumes in talent screening, or managing the various steps in the interview process. These processes, when performed by humans, are expensive, time-consuming, and inconsistent. AI is uniquely suited to help HR leaders spend less time and money collecting information, and more time taking action. We expect a cloud and AI renaissance for HR leaders between 2020 and 2023, similar to the one sales and marketing executives have experienced over the past decade.

AI will continue to change hiring practices by replacing repetitive and monotonous work. I doubt that we will see the “Waze” of HR AI, or the types of technology that will tell you exactly what to do about a candidate and when to do it. More likely we will see a rise of tools that bring humans to the precipice of action, driving more informed decision making rather than full process replacement. At the end of the day, HR leaders have intellectual and domain expertise that a machine may never have. One cannot teach the “human touch.”
