
Winning the First Wave of AI: The Rise of AI in Physician-Led Healthcare

By Jeremy Shiner, Founder and President of Myriad Systems

Administrative burdens are piling up in healthcare: clinical documentation, inefficient EHRs, scheduling challenges, and endless insurance follow-ups. According to the AMA, providers now spend up to two hours on admin tasks for every hour of patient care.

This strains providers, frustrates patients, and weakens the system. Meanwhile, reimbursements are shrinking, patient payments are declining, and costs continue to rise.

But for the first time in decades, a promising shift is underway, particularly for physician-led practices. The first wave of major AI adoption has arrived, offering both hope and hesitation. Providers now face a paradox: the fear of adopting AI, and the fear of being left behind. How we respond may define the future of independent care.

The AI paradox

Providers stand at a crossroads, with no clear path free of significant challenges.

  • Fear of adopting AI: With valid concerns around data security, regulatory compliance, integration hurdles, and a rush to market by sometimes overeager vendors, this fear is well founded.
  • Fear of being left behind: Just as legitimate are concerns about losing a competitive edge, especially in today’s increasingly unfavorable economic climate.

To protect the future of the healthcare industry, we must find a way to realize AI’s benefits while minimizing the risks of adoption.

Understanding the risks

Dismissing the risks or rewards of AI in healthcare is futile. I have spoken with providers so frustrated by this dual reality that they are considering leaving medicine altogether. But that is not the answer. When approached thoughtfully, the challenges give way to progress, and AI’s full potential can reshape how care is delivered and experienced.

Regulatory and compliance risks

Healthcare is tightly regulated by HIPAA, GDPR, and other global standards. Yet the label “HIPAA-compliant” is often superficial and should not replace proper due diligence. A Business Associate Agreement (BAA) with the primary vendor is required but insufficient without considering downstream intermediaries.

Too often, vendors of API plug-ins claim compliance without fully understanding these requirements. When data is routed to a third-party Large Language Model (LLM), the engine behind most AI tools in healthcare, it often lands outside the protection of HIPAA-covered entities and their business associates.

Most LLMs lack any mechanism to offer BAAs. With hundreds of potential data endpoints, including training processes, cloud vendors, and subcontractors, the risks compound quickly. These vulnerabilities are frequently overlooked by both providers and vendors. Identifying them early is essential to lead and win in the first wave of AI adoption.

Solution: Opt for vendors hosting LLMs directly

Vendors that plug directly into ungated LLMs via API connections cannot satisfy these BAA requirements. Established vendors, by contrast, often have the technical capability, capital, and knowledge to host LLM instances either locally or in internally managed, HIPAA-compliant clouds. Direct APIs pass PHI to unregulated, unprotected business entities, while solutions that eliminate unnecessary data endpoints reduce liability, simplify BAA compliance, and protect patient trust.

Before moving forward, ask:

  • Is patient data being processed and stored directly on servers managed by my end vendor, who is a business associate?
  • Is any data routed through external APIs, overseas vendors, or public large language models? Is data sold or used to train outside models?
  • If processed on vendor servers, are they cloud-based or physical? What HIPAA measures and compliance certifications do they carry?

Clinical and operational realities

AI adoption in healthcare faces real challenges within clinical workflows and daily operations. Providers worry about documentation accuracy, coding reliability, and the risk of AI-generated errors that could compromise care and increase liability.

Front office teams fear misbilling and inaccurate estimates, which damage patient trust and drive attrition. Integrating AI with existing systems is often difficult, especially in settings already strained by complex workflows and limited staff. Data security concerns add to the pressure.

Many clinical inaccuracies stem not from AI itself but from poor integration and lack of data context. The solution is not to avoid AI but to embed it into workflows so providers can easily oversee, verify, and stay compliant.

Solution: Partner with vendors fully integrated into your EHR

To reduce clinical errors, providers should partner with companies whose AI tools integrate deeply with their existing EHR systems. True integration means using existing patient data to support and enhance documentation, not generating content in isolation.

AI solutions should be designed for healthcare, ensuring recommendations are based on accurate, verifiable information. This approach protects clinical accuracy, lowers liability risk, and creates more reliable workflows from the start.

Before moving forward, ask:

  • Is this AI solution embedded directly within my EHR, or is it a bolt-on tool?
  • Does the AI solution pull structured data from patient records to inform documentation, or is it generating content independently?
  • Has the vendor taken measures to prevent the AI solution from hallucinating?

The logistics of implementation

Healthcare will always be a prime target for cyberattacks due to the intrinsic value of patient data. The first wave of AI adoption has relied heavily on third-party AI plug-ins, broadening these risks. These plug-ins and integrated API vendors largely fall outside the stringent testing and requirements of ONC certification (Office of the National Coordinator for Health Information Technology).

In a world where breaches are inevitable, proper due diligence is non-negotiable. Providers and institutions must insist on end-to-end, audited security controls and demand full transparency on where data is held, and how it is shared.

Using third-party vendors that plug into your existing systems increases costs and adds operational barriers. These integrations also expand the surface area for cybersecurity attacks and require additional time for staff training and HIPAA compliance due diligence.

Solution: Opt for bundled services from established vendors

Healthcare practices should work with vendors that host their AI solutions in regulated, audited environments. Priority should go to ONC-certified Health IT firms familiar with strict government or clinical standards. Similarly, companies with experience in financial tech, like billing and payments, often follow banking regulations and bring strong cybersecurity practices and a solid understanding of HIPAA compliance.

These vendors usually handle PHI internally, reduce costs with bundled services, and offer reliable cyber protections. Partnering with a trusted or well-regarded company can make a significant difference.

Before moving forward, ask:

  • Is this vendor ONC-certified or subject to government-level cybersecurity standards?
  • How long will this take to implement, and what is my cost in the meantime?
  • Does this vendor offer bundled pricing with other services I use or need?

The threat of inaction: missing the moment

If risk defines one side of the paradox, urgency defines the other. As physician-led practices face growing administrative burdens and outside influence, AI offers a rare opportunity to regain control, streamline payments, improve patient care, and free up time. Done right, adoption can drive efficiency and independence. Waiting may cost more than acting.

The risks of inaction include:

  • Losing competitive edge to corporate-backed systems and private equity
  • Falling short of modern patient expectations around access, personalization, and cost transparency
  • Wasting limited resources on manual, error-prone processes that automation could handle

The solution is straightforward:

Follow the steps outlined in this article, start slow, choose integrated and compliant vendors, and prioritize support and training. With a thoughtful strategy, AI becomes a force multiplier: reducing risk, enhancing care, and positioning your practice for long-term success.

Before moving forward, ask:

  • Which workflows in my practice are most vulnerable to inefficiency today?
  • Am I happy with the rate of patient and insurance payments? Could these be optimized with AI?
  • How much time do I spend documenting and/or coding each encounter? What would I save by reducing this time investment by 30-60%?
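That last question can be answered with simple back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical placeholder numbers (encounter volume, minutes per note, clinic days); substitute your practice's own figures.

```python
# Rough estimate of documentation/coding time recovered per year.
# All inputs are hypothetical placeholders, not data from any study.

def annual_hours_saved(encounters_per_day, minutes_per_encounter,
                       clinic_days_per_year, reduction):
    """Hours of documentation/coding time recovered per year,
    given a fractional reduction (e.g. 0.30 for 30%)."""
    total_minutes = encounters_per_day * minutes_per_encounter * clinic_days_per_year
    return total_minutes * reduction / 60

# Example: 20 encounters/day, 10 minutes of documentation each,
# 220 clinic days/year, at the 30% and 60% reductions cited above.
low = annual_hours_saved(20, 10, 220, 0.30)   # 220 hours/year
high = annual_hours_saved(20, 10, 220, 0.60)  # 440 hours/year
print(f"Estimated savings: {low:.0f}-{high:.0f} hours/year")
```

Even at the conservative end, that is weeks of clinician time per year, which frames the cost of inaction in concrete terms.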

Conclusion

Healthcare’s AI paradox, fearing both adoption and avoidance, is understandable but manageable. Providers can move from uncertainty to confidence by addressing risks and adopting tailored AI solutions through gradual, strategic steps.

AI adoption is not just a tech upgrade. It boosts provider efficiency and reshapes workflows. Providers can see more patients, make more accurate diagnoses, and deliver more effective treatments.

With a thoughtful approach, providers stand on the edge of a transformative era, with a real chance to regain control of the industry and improve national health.

Instead of fearing change, healthcare organizations can lead it, turning today’s concerns into tomorrow’s opportunities.

For questions or guidance on safe, effective AI adoption in private practice, feel free to get in touch or learn more here.
