The inevitable march of artificial intelligence into the realm of human interaction has taken another significant step with the launch of Onix, a new company aiming to offer personalized guidance from AI-powered doppelgangers of celebrated experts. While the concept of always-available, potentially more affordable advice is alluring, Onix is venturing into a complex landscape fraught with concerns about accuracy, privacy, intellectual property, and the very nature of human connection.

The Rise of AI in Personal Guidance

The integration of AI into advisory roles was perhaps always a matter of "when," not "if." As large language models (LLMs) have become increasingly sophisticated, mastering human-like conversation and absorbing vast amounts of the world’s knowledge, the prospect of using them for personal guidance has become compelling. The perceived advantages are clear: AI is accessible 24/7 and often at a fraction of the cost of human consultation. However, these benefits are shadowed by significant drawbacks. LLMs are known to be prone to factual inaccuracies and outright fabrications, often referred to as "hallucinations." The act of sharing deeply personal information, secrets, and woes with a large corporation also raises substantial privacy concerns. Furthermore, the wisdom dispensed by AI is rarely crisply sourced, and a significant portion is derived from creators who receive no compensation, raising ethical questions about intellectual property. Beyond these practical issues, there’s an underlying unease about the prospect of human beings receiving advice from artificial entities, a scenario that many find inherently dystopian.

Onix’s Ambitious Solution: "Personal Intelligence"

Against this backdrop, Onix has emerged, positioning itself as a "Substack for chatbots." The company, co-founded and led by David Bennahum, a former WIRED contributor, aims to address many of the aforementioned issues, with the notable exception of the dystopian element. Onix’s model allows users to subscribe to an AI representation of a celebrated expert, termed an "Onix." These AI doppelgangers are trained on the specific content and expertise of the individual they represent, with the goal of replicating the experience of a face-to-face appointment. The bots are designed to project the unique personalities of the experts, though initial user experiences suggest a certain dryness in these interactions.

Bennahum asserts that Onix has invested years in developing technology designed to protect both users and experts, a system he terms "Personal Intelligence." A key feature of this system is the on-device storage of user information, which is encrypted. This measure is intended to safeguard user data, meaning that even if a government agency requests information from the Canada-based company, only basic details like a user’s email address would be available. The training process, where experts themselves provide their personal content, is intended to mitigate intellectual property disputes. Bennahum also claims that built-in guardrails limit conversations to the scope of the expert’s consultations, thereby minimizing hallucinations.

However, early testing indicates that these guardrails are not foolproof. In one instance, when a user queried an AI therapist about the NBA playoffs, veering off the intended topic, the bot not only acknowledged the "fun change of pace" but also hallucinated details about the previous year’s conference finals. Similarly, another user steered an AI away from a discussion of ketamine therapy into a conversation about an indie band’s breakup, which the AI interpreted, in psychobabble fashion, as a "powerful expression of their neurobiology in distress." While Onix is currently in beta, these incidents highlight the ongoing challenge of controlling AI behavior and maintaining conversational boundaries.

This Startup Wants You to Pay Up to Talk With AI Versions of Human Experts

A Growing Market for AI-Driven Expertise

Onix is not the first entity to explore the commercial potential of AI-powered expertise. The concept of chatbots serving as proxies for human advisors is becoming increasingly common. A notable example is Manhattan psychologist Becky Kennedy, whose company, Good Inside, has built a substantial parenting advice business around a chatbot named Gigi, trained on Kennedy’s extensive knowledge. Last year, Kennedy’s company reportedly generated $34 million, underscoring the significant financial opportunities in this emerging market. For experts, Onix offers a potentially lucrative model where their accumulated knowledge can generate revenue passively, independent of their direct time commitment. As an Onix white paper articulates, "The expert’s knowledge base becomes a capital asset that generates revenue independent of their time."

Onix aims to eventually host thousands of experts across a broad spectrum of disciplines. However, the current beta phase features a carefully selected group of 17 experts, with a primary focus on health and wellness. Many of these initial experts are not only recognized professionals but also prominent marketers and influencers, often with books, podcasts, or products to promote.

Navigating the Ethical and Practical Landscape

The platform’s experts are keenly aware of the nuances involved in offering AI-driven advice. Michael Rich, who counsels families on media overuse and its effects, stated his willingness to transfer his knowledge to Onix due to its privacy safeguards and its clear demarcation that it does not provide actual medical treatments. "It’s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it," Rich explained. Bennahum concurs, emphasizing that interacting with an Onix representing a pediatrician, for example, is not equivalent to a doctor’s visit. "It’s meant to augment [a user’s] ability to be thoughtful around whatever pediatric journey they’re on," he stated. A disclaimer is present within the system, noting that users are receiving guidance, not medical treatment. However, in an era where many individuals already treat general AI chatbots like ChatGPT and Claude as de facto therapists, and given the persistent affordability issues in accessing professional healthcare, this warning may well be overlooked by a significant portion of the user base.

David Rabin, another expert on the Onix platform specializing in stress management, initially harbored reservations about the process. However, Onix’s privacy and content protections assuaged his concerns. He expressed satisfaction with the early interactions observed between users and his Onix, noting, "I didn’t train it too much, but it was fairly impressive in terms of imitating my genuine concern, compassion, and empathetic candor with people." Rabin also stressed the need for ongoing vigilance, acknowledging that "AI can overstep its boundaries." He foresees his Onix potentially offering a calming influence for anxious users, perhaps averting unnecessary trips to the emergency room. He also highlighted the economic advantage, stating, "It’s cheaper than seeing me in person." While Rabin has not yet set a subscription price for his Onix, he anticipates it falling within Bennahum’s envisioned range of $100 to $300 annually, a considerable reduction from his personal hourly rate of $600.

The Shadow of Product Placement and "Physiological Sighs"

A critical concern arose during testing when Rabin’s Onix, while discussing sleep improvement, recommended the "Apollo Neuro," a "noninvasive tool" that uses vibrations to promote relaxation. The AI then disclosed that Rabin is a co-founder of the company behind it. Rabin confirmed this instance of product placement, stating, "Where people are selling products that are helpful in their mission, the system is going to recommend them." Bennahum corroborated this, explaining, "These are people building a set of products around their philosophy of wellness. When you talk to them, they’re going to surface the fact that they may have a product that can help you." This raises questions about the potential for undisclosed or subtly integrated marketing within AI-driven consultations.

While Onixes are explicitly not practicing medicine, they can propose action plans and therapeutic techniques. During testing, multiple Onixes suggested breathing exercises. Elissa Epel’s Onix, for instance, invited the user to "try it together" for "physiological sighs." When questioned about the collaborative nature of the exercise, the AI confirmed its participation. Upon completion, however, the AI admitted, "As an AI I don’t have a physical body or a nervous system. However, I was fully present with you." This stark reminder of the AI’s artificiality, arriving in the middle of a seemingly empathetic exercise, proved counterproductively stressful for one user.


Expert Perspectives and the Question of Efficacy

Robert Wachter, chair of the Department of Medicine at the University of California, San Francisco, and author of "A Giant Leap: How AI is Transforming Healthcare and What It Means for Our Future," offers a broader perspective. Wachter, whose book explores the concept of "digital twins" in healthcare, expressed relief regarding Onix’s privacy and intellectual property protections. He acknowledges the potential advantages, particularly given the limits on access to experts in the current healthcare system. But he posed a fundamental question: "To me, it’s just an empirical question of, does it work?" That question of efficacy remains the ultimate arbiter of Onix’s success.

Potential Benefits and Enduring Concerns

Onix presents a vision of AI-powered assistance that could be beneficial. The platform can be viewed as a realization of interactive educational tools, akin to those envisioned in Neal Stephenson’s novel "The Diamond Age." For some, the ability to receive explanations about physiological responses or exercise routines from an AI might be an effective way to understand and address personal challenges. The advice received, such as Mark Sisson’s humorous suggestion to "run like a saber tooth tiger is chasing you," highlights the potential for engaging and informative interactions. The model could also extend to other domains, such as personal finance.

However, Wachter’s critical question about efficacy lingers. Bennahum’s comparison of Onix to industry-leading AI models, arguing that guidance from a single, focused expert is superior to a general repository of all knowledge, is a compelling premise. Yet the argument cuts both ways: just as human experts can be flawed or exploitative, so can their AI representations. While Bennahum states that the initial cohort of experts has been carefully vetted, the long-term policies for selecting experts at scale remain undefined.

Perhaps the most profound concern is the potential erosion of genuine human connection. Even if AI-generated advice from a renowned expert surpasses that of a human practitioner, the irreplaceable value of human-to-human interaction cannot be easily dismissed. This issue extends far beyond Onix, representing a broader societal trend, and it is hard to celebrate yet another step in the decline of human connection. As Onix embarks on its public rollout, the balance between technological innovation, ethical considerations, and the preservation of authentic human relationships will be a crucial determinant of its ultimate impact.
