Human Native AI, a London-based startup, is addressing a critical issue in the AI industry by creating a marketplace for AI training data licensing deals. The company aims to bridge the gap between AI companies that require vast amounts of data for training their models and content creators who can provide this data, ensuring that all transactions are ethical and mutually beneficial.


A Solution to the AI Data Challenge

Founded by James Smith and Jack Galilee, Human Native AI grew out of Smith’s experience working on Google’s DeepMind project, where a scarcity of quality training data posed significant challenges. Drawing a parallel with the Napster era of music sharing, Smith envisioned a more structured, legal framework for licensing AI training data. The result is a marketplace where rights holders can upload their content for free and connect with AI companies through revenue-share or subscription deals.

“Can we get to a better era? Can we make it easier to acquire content? Can we give creators some level of control and compensation?” Smith asked. Those questions drove him to build Human Native AI.

The company launched in April 2024 and is currently operating in beta. Despite its nascent stage, it has already secured a £2.8 million seed round led by LocalGlobe and Mercuri. The funds will be used to expand the team and further develop the platform. Smith’s ability to secure meetings with CEOs of long-established publishing companies highlights the strong demand for this service.

Addressing Market Needs and Ethical Considerations

Human Native AI’s approach offers a much-needed solution in the AI industry, which is currently grappling with ethical sourcing of data. Recent licensing deals between OpenAI and media giants like The Atlantic and Vox underline the industry’s shift towards securing legal access to data.

The startup’s marketplace helps rights holders prepare, price, and protect their content. Human Native AI takes a cut of each deal and charges for its transaction and monitoring services, a business model intended to benefit all parties involved.

Smith pointed to the potential market size by noting that Sony Music recently sent cease-and-desist letters to 700 AI companies, underscoring the demand for legally sourced data. Human Native AI aims to serve both large and small AI companies, democratizing access to quality training data and leveling the playing field.

Future Prospects and Industry Impact

From my point of view, Human Native AI is poised to play a crucial role in the evolving AI landscape. By offering a platform for ethical data sourcing, the startup aligns with incoming regulation such as the European Union’s AI Act. That regulatory foresight could position Human Native AI as a vital partner for AI companies looking to comply with new laws.

Smith’s vision for the future includes using collected data to provide rights holders with insights on pricing their content. This analytical capability could further enhance the value proposition of Human Native AI’s platform, making it an indispensable tool for both content creators and AI developers.

Conclusion

Human Native AI is addressing a significant gap in the AI industry by providing a marketplace for ethical and legal data licensing deals. Its innovative approach not only facilitates the training of AI models with quality data but also ensures that content creators are fairly compensated. As AI regulations tighten, Human Native AI’s role in promoting responsible data use will become increasingly important. This startup is not just a marketplace; it’s a bridge between the AI of today and the responsible, regulated AI of the future.