The Center for Data Innovation spoke with Tom Iler, chief product officer of Catalyte, a company based in Baltimore that uses AI to find and train employees and provides software development services to Fortune 1000 firms. Iler discussed why resumes are outdated, how Catalyte quantifies the traits of a successful software developer, and how else firms can use data to make employment decisions, including for employee advancement.
Michael McLaughlin: Catalyte has been described as a company that “wants to put an end to the resume” by using AI. How exactly do you do this?
Tom Iler: We believe resumes are an outdated and, quite frankly, ineffective way to evaluate talent. Much of what is included in a resume tells you more about the applicant’s pedigree, such as their address, school attended, and prior companies, than about their aptitude and ability to be a high performer in a given position.
From the very first thing you see, usually the applicant’s name, you are presented with information that has little to no value in predicting how someone will perform in a job. Beyond offering no real value, it creates the potential for bias for or against this person.
For 18 years, Catalyte has created and refined an artificial intelligence platform to screen applicants and select those with the natural ability and cognitive agility to become great software engineers. We intentionally avoid resume information during the hiring process. Anyone who wants to enter our training program and become a developer interacts with our AI platform through a structured screening tool, which takes approximately two hours to complete. This activity derives data from which we can compare applicants’ characteristics (how they solve problems, whether they look for other resources, whether they change answers when new information is presented) with those of our developers on real-world projects. That closed-loop process gives us the information we need to determine if someone, regardless of their background or resume, has the potential to be a great developer.
McLaughlin: How much more effective is Catalyte’s approach compared to more traditional hiring methods?
Iler: Catalyte’s approach is more effective than traditional hiring methods in a number of ways. It allows us to consider innate ability when screening candidates. This expands the potential labor pool well beyond those who just have a specific set of pre-existing skills or experiences. In a highly competitive market for technical talent, this allows us to access a labor force that is hidden in plain sight.
With Catalyte’s screening, we get an objective, upfront view of an applicant’s aptitude and ability that’s validated against nearly two decades of project outcome data. This means the people coming into our training program already have the inclination to be great developers. We then ensure their success with our proprietary training and development process. At the end of it, we’re turning out full-stack developers who join teams that our clients report are three times more productive than traditionally sourced teams.
Because our AI ignores some traditional requirements used to screen out candidates and focuses more on natural ability, it creates a more diverse workforce across many demographic factors, including race, gender, age, education, and socioeconomic status. Catalyte was founded on the idea that talent is equally distributed, but opportunity is not. We’ve proven that to be correct. When you take away biased methods of selecting talent, and base hiring decisions on skill and aptitude, you’ll get a workforce that more closely mirrors the communities in which you’re based.
McLaughlin: How do you quantify traits indicating that someone could one day become a successful software developer? How do you validate this?
Iler: One of the benefits of using machine learning to predict a measurable, successful outcome is that algorithms determine which traits are important and give them the appropriate weighting. In models where individuals choose which traits likely relate to high performance, there is a much greater opportunity to introduce bias.
Recently, there was an effort to analyze resumes and LinkedIn profiles for terms and phrases used to select recruiting targets. However, the whole effort was scrapped when it was determined that the terms and phrases selected to train the algorithms were biased, because the group that selected them was mostly men.
Using a machine learning approach that links traits and characteristics to objective, real-world outcomes provides an unbiased, results-oriented solution. Catalyte has 18 years of this objective, real-world data validating which traits indicate who has the aptitude to be a great developer.
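To make the idea concrete, here is a minimal, purely illustrative sketch of letting the data, rather than a person, assign trait weights. The traits, numbers, and outcomes below are invented for demonstration; Catalyte’s actual features and models are proprietary and not described in this interview. The sketch fits a simple logistic-regression-style model by gradient descent, so the learned weights reflect which measured traits correlated with the observed outcome.

```python
import math

# Hypothetical screening measurements for six past trainees, scaled 0-1:
# [problem_solving, seeks_resources, revises_answers]
X = [
    [0.9, 0.8, 0.7],
    [0.8, 0.4, 0.6],
    [0.7, 0.9, 0.3],
    [0.3, 0.7, 0.8],
    [0.2, 0.3, 0.4],
    [0.4, 0.2, 0.9],
]
# Outcome observed later on real projects: 1 = high performer, 0 = not.
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights by plain batch-free stochastic gradient descent.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        pred = sigmoid(sum(w * x for w, x in zip(weights, xi)) + bias)
        err = pred - yi  # gradient of log-loss w.r.t. the linear score
        weights = [w - lr * err * x for w, x in zip(weights, xi)]
        bias -= lr * err

# The learned weights, not a recruiter's intuition, now encode which
# traits mattered most for the outcome in this toy data set.
print([round(w, 2) for w in weights])
```

In this toy data, problem-solving scores separate the two outcome groups, so its weight comes out largest; a human never decided that ranking. Validating such a model against real project outcomes, as described above, is what turns the weights into evidence rather than opinion.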
McLaughlin: Catalyte also plans on adding the ability to measure an individual’s emotional quotient to its assessment. Can you explain how you plan to do that? How would that be relevant to an employer?
Iler: Catalyte’s models that screen applicants for potential fit in technology roles consider characteristics that include personal style and preferences that can relate to an individual’s EQ. By using that information when building our models, we can understand and apply the relationships among those characteristics.
Our screening tool, which Catalyte developed over the course of a decade, collects and measures broad, multi-dimensional characteristics of an applicant to provide the AI models the best information to be able to predict their likely success as a software developer.
For example, certain screening questions, measurements, and exercises can relate to characteristics such as self-awareness. All of that data is made available to the algorithms that consider which factors best predict the success of a potential software developer.
McLaughlin: Besides identifying potential employees, how else could firms use data from hiring assessments? For example, could the data identify particular areas an employee may need more training, or even when an employee may leave a company?
Iler: Hiring assessments, or other AI or machine learning platforms, can serve a variety of purposes. The one major caveat is that they aren’t intended to be “catch-all” platforms. You need to optimize the model or models for the outcomes you want. Otherwise, you risk collecting, analyzing, and making decisions on data that isn’t really telling you anything.
For example, our models are optimized to find people with the aptitude to become great software engineers. Just because it’s effective in that capacity doesn’t mean we could use the same platform at a leading law firm and hire the best lawyers. Why? Because the characteristics that predict whether someone will make a great developer might not be the same ones that make a great lawyer. You would have to discern which other data elements are required and tune the models to adjust and learn for the new, desired outcome.
We’re considering adaptations and permutations of the model to help our developers advance up the career ladder. When are they ready for a promotion based on demonstrated skills? Did they show a factor suggesting they could be more successful in a project management or analyst role instead of a development position? How can we best form teams to optimize for skill and experience level across our whole organization?