The Center for Data Innovation spoke with Rachel O’Connell, founder and CEO of TrustElevate, an age-verification provider based in Malta. O’Connell spoke about TrustElevate’s age-verification solution, the problem of children entering adult spaces online, and the role of digital parenting.
Patrick Grady: Could you tell us a bit about what TrustElevate does?
Rachel O’Connell: We solve the unsolvable problem of knowing the ages of people under the age of 18 who are online and then, for very young children, we verify an assertion of parental responsibility for a child, because a child can’t sign up to a privacy policy. So we solve a very specific problem for one-third of the population. And it’s applicable across business sectors. For example, we enable digital onboarding of child and teen bank accounts, where currently you have to physically go into a bank with your ID documents. We also enable streaming, gaming, and social media platforms to know the ages of their users and to obtain consent from the verified holder of parental responsibility for the child.
Grady: What motivates children to declare false ages in order to enter online spaces? How do you see the responsibility of parents versus platforms to monitor this behavior?
O’Connell: All children are curious. I remember one of the first words I looked up in the dictionary was “sex.” I also looked up the F word just to see! It’s a natural part of being a kid trying to grow up and figure out things. I think that is healthy and not something you want to stifle.
Right now digital parenting is extremely difficult. If you look at the terms and conditions and the privacy policy that you, as a parent, sign up to on behalf of your child, it says, “we recognise you’re setting up this email for your child…”—who might be like eight years of age—“…but you absolve us from responsibility if the child sees content that is not age appropriate.” We want to push companies to say: now that the child’s age is verified, you have to adjust and offer more granular privacy settings relative to the child, rather than externalising responsibility.
Grady: What prevents or discourages companies from implementing robust age verification services?
O’Connell: I used to work in industry, and when I was at Bebo as chief security officer, I met with all the policy people on a regular basis; we’d meet ahead of meetings with government ministers and various multi-stakeholder groups. I think the role of public policy people is to be gatekeepers, and it would be absolutely naive to think that self-regulation will work after 25 years of it really not achieving that much. It’s the lack of regulation and the lack of enforcement of the regulation. The fines and penalties can still be seen as a cost of doing business, rather than an incentive to change.
Grady: How does TrustElevate’s age verification work, and what are Age Band Tokens?
O’Connell: For our verification to work, we need specific data points: the child’s first name, last name, and date of birth provided by the parent, and the parent’s mobile number. With those data points, we are able to check against various authoritative data sources. Some of them are held by government; some are open banking or mobile operators’ application programming interfaces (APIs)—it differs depending on the market that we’re in. We hash the information on the device and at the data sources we’re checking against, so we are not holding any data. (It’s a decentralised zero-data model.) We also want to make sure it’s as accessible and as easy to use as possible, so we’ve built an API, a web software development kit (SDK), and are building a gaming plugin for game developers. We’re also working with large companies, who are gatekeepers, to enable those age band tokens to be distributed through the data ecosystem.
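To make the hashing step concrete, here is a minimal sketch of the kind of on-device tokenisation O’Connell describes. It is illustrative only: the function, normalisation rules, and use of plain SHA-256 are assumptions for the sketch, not TrustElevate’s actual implementation, and a production system would likely use salted or keyed hashing.

```python
import hashlib
import re

def make_check_token(first_name: str, last_name: str, dob: str, parent_mobile: str) -> str:
    """Derive a one-way token from the verification data points on the device,
    so only the hash -- never the raw personal data -- is ever compared."""
    mobile_digits = re.sub(r"\D", "", parent_mobile)  # normalise phone formatting
    canonical = "|".join(
        part.strip().lower() for part in (first_name, last_name, dob, mobile_digits)
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The authoritative source (a government registry, open-banking, or operator API)
# hashes its own copy of the record the same way; a match means the asserted
# parent-child details agree with the record, with neither side holding the
# other's raw data -- the "decentralised zero-data" idea.
device_token = make_check_token("Alice", "Smith", "2014-03-07", "+356 9912 3456")
registry_token = make_check_token("alice", " Smith", "2014-03-07", "+35699123456")
assert device_token == registry_token  # formatting differences hash away
```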
If you’ve got an app, and it’s for under-12s, for example, you can send a query to our API to find out if the child is in the appropriate age band, and you will get a yes or no response. It’s privacy-preserving as well because we’re not sharing huge amounts of data around the place. For many apps and games, there are lots of data processors and sub-processors that also need informed and explicit consent. What we’re doing is surfacing those companies to parents so that they can say yes or no to the data processing, and the same for teens.
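The age-band check can be pictured as a single yes/no API call, as in the sketch below. The endpoint URL, parameter names, and response shape are hypothetical, invented for illustration; TrustElevate’s real API will differ.

```python
import json
import urllib.request

def check_age_band(check_token: str, required_band: str) -> bool:
    """Ask the verification service whether the hashed user falls in the
    required age band. Only a bare yes/no comes back -- no personal data."""
    # Hypothetical endpoint and payload shape, for illustration only.
    payload = json.dumps({"check_token": check_token, "age_band": required_band}).encode()
    request = urllib.request.Request(
        "https://api.example-verifier.com/v1/age-band-check",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["in_band"]  # e.g. {"in_band": true}

# An under-12s app would gate sign-up on the boolean answer alone:
# if check_age_band(device_token, "under_12"):
#     continue_signup()
```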
Grady: How can safety-by-design principles inform the design of new, 3D, and immersive virtual spaces?
O’Connell: We’re working with a company called Axon Park, who build metaversities. What’s really interesting is that many of the new developers have grown up as digital natives. They know what can happen online and they are highly motivated to create better experiences for children and young people. Right now, under an Innovate UK project, we have developed a child rights impact assessment. Through AI, we can calibrate the risk scores for each of the leaves in decision trees. Then you have a situation where people are educated and developers know that, if you do that, for this age band, this is the risk or these are the potential harms. So then you create teachable moments, and that’s the operationalisation of safety-by-design principles.
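As a rough illustration of scoring decision-tree leaves per age band, the toy table below turns a high leaf score into a warning a design tool could show a developer, the “teachable moment” O’Connell mentions. The features, age bands, scores, and threshold are all invented for the sketch, not figures from the Innovate UK project.

```python
# Hypothetical risk table: each (feature, age band) leaf carries a calibrated
# risk score that a design tool could surface to developers at build time.
RISK_SCORES = {
    ("open_voice_chat", "under_12"): 0.9,
    ("open_voice_chat", "13_17"): 0.6,
    ("friends_only_chat", "under_12"): 0.3,
    ("friends_only_chat", "13_17"): 0.2,
}

def assess_feature(feature: str, age_band: str, threshold: float = 0.5) -> str:
    """Look up the leaf's risk score and turn a high score into a
    teachable moment for the developer."""
    score = RISK_SCORES.get((feature, age_band), 0.0)
    if score >= threshold:
        return f"High risk ({score}) for {age_band}: redesign or gate '{feature}'."
    return f"Acceptable risk ({score}) for {age_band}: '{feature}' can ship."

print(assess_feature("open_voice_chat", "under_12"))
```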