
VeriHealth AI: Real-Time Medical Misinformation Detection System

Kick-off: April 30, 2026



The problem

The digital health ecosystem is increasingly saturated with misleading and unverified medical information. With the rise of generative AI and short-form viral content, health-related claims spread faster than they can be validated.

Critical signals exist across multiple domains:

  • Social media platforms (TikTok, X, Instagram) where claims originate and go viral.
  • Scientific literature (peer-reviewed journals, PubMed).
  • Public health institutions (CDC, WHO, NIH).
  • Health guidelines and consensus reports.

However, these sources remain disconnected, making it extremely difficult to:

  • Verify medical claims in real time.
  • Distinguish between credible and misleading health information.
  • Provide transparent, evidence-backed responses at scale.
  • Track how misinformation evolves across platforms.

As a result:

  • Harmful health narratives spread unchecked.
  • Platforms lack scalable verification mechanisms.
  • Users make decisions based on incomplete or false information.
  • Trust in digital health information continues to decline.

The project goals

This project proposes building VeriHealth AI, a real-time medical misinformation detection system designed to connect viral health claims with validated scientific consensus.

The solution focuses on constructing a structured claim-to-evidence dataset and enabling AI-powered verification through Retrieval-Augmented Generation (RAG).

Key components include:

  • Collecting viral medical claims from social media platforms.
  • Aggregating scientific evidence from trusted public sources (PubMed, CDC, WHO, NIH).
  • Designing a structured claim-to-evidence mapping framework.
  • Developing a RAG-based fact-verification engine.
  • Building a misinformation detection API for real-time use.
  • Defining verification standards and traceability guidelines.
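The claim-to-evidence mapping framework above could be represented with a schema along these lines. This is a minimal sketch: the class names, field names, and label vocabularies (`verdict`, `stance`, `risk_level`) are illustrative assumptions, not the project's final specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceLink:
    """One link from a claim to a supporting or refuting source."""
    source: str     # e.g. "PubMed", "CDC", "WHO", "NIH"
    reference: str  # identifier or URL of the evidence document
    stance: str     # assumed labels: "supports", "refutes", "insufficient"

@dataclass
class ClaimRecord:
    """One viral medical claim with its verification state."""
    claim_id: str
    text: str                    # the raw social-media claim
    platform: str                # e.g. "TikTok", "X", "Instagram"
    verdict: str = "unverified"  # assumed labels: "accurate", "misleading", "ambiguous"
    risk_level: str = "unknown"  # assumed labels: "low", "medium", "high"
    evidence: List[EvidenceLink] = field(default_factory=list)

# Example record for a common viral claim (illustrative data only)
record = ClaimRecord(
    claim_id="c-001",
    text="Vitamin C cures the common cold",
    platform="TikTok",
)
record.evidence.append(
    EvidenceLink(source="PubMed", reference="PMID:23440782", stance="refutes")
)
print(record.verdict, len(record.evidence))  # → unverified 1
```

Keeping every evidence link as an explicit record like this is what makes the traceability requirement (claim → source) auditable later.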

As part of this challenge, the system must demonstrate the ability to:

  • Link unstructured social media claims to authoritative scientific evidence.
  • Align multiple sources into a consistent verification framework.
  • Classify claims based on accuracy, ambiguity, and risk level.
  • Retrieve relevant, high-quality supporting evidence.
  • Provide transparent traceability between claims and sources.
  • Handle noisy, ambiguous, or partially true claims.
  • Deliver real-time responses through an API interface.
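The retrieval step of the requirements above can be sketched as follows. This toy version ranks evidence passages by simple word overlap with the claim; a real RAG engine would embed the claim and evidence passages and query a vector index instead. The corpus documents and the scoring function are illustrative assumptions.

```python
# Toy retrieval step: rank evidence passages against a claim.
# Word overlap stands in for embedding similarity here.
EVIDENCE_CORPUS = [
    {"id": "doc-1",
     "text": "randomized trials show vitamin c does not cure the common cold"},
    {"id": "doc-2",
     "text": "hand washing reduces transmission of respiratory viruses"},
]

def retrieve(claim: str, corpus: list, top_k: int = 1) -> list:
    """Return the top_k passages sharing the most words with the claim."""
    claim_words = set(claim.lower().split())
    scored = [
        (len(claim_words & set(doc["text"].split())), doc)
        for doc in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop passages with zero overlap rather than returning noise.
    return [doc for score, doc in scored[:top_k] if score > 0]

hits = retrieve("vitamin c cures the common cold", EVIDENCE_CORPUS)
print([doc["id"] for doc in hits])  # → ['doc-1']
```

Once the relevant passages are retrieved, a generation step would compare the claim against them to produce the accuracy/ambiguity/risk classification described above.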

Impact of the Solution

Digital Platforms & Social Media

  • Scalable detection of harmful medical misinformation.
  • Improved content moderation support with explainable AI.
  • Reduced spread of misleading health narratives.

Public Health Organizations

  • Faster identification of emerging misinformation trends.
  • Data-driven communication strategies.
  • Stronger ability to respond to public health risks.

Researchers & AI Developers

  • Access to high-quality, structured verification datasets.
  • Foundation for building trustworthy AI systems.
  • Acceleration of research in AI safety and fact-checking.

General Public

  • Access to reliable, evidence-based health information.
  • Increased trust in digital content.
  • Reduced exposure to harmful or misleading advice.

Real-World Impact

  • Reduction in the spread of harmful medical misinformation.
  • Improved public health awareness and decision-making.
  • Stronger alignment between digital content and scientific consensus.
  • Advancement of transparent and trustworthy AI systems.

Timeline

Sprint 1: Data Discovery & Collection Setup (Weeks 1–2)

  • Establishing the data acquisition pipelines for viral medical claims and trusted scientific sources.

Sprint 2: Dataset Structuring & Evidence Mapping (Weeks 3–4)

  • Building the structured dataset and defining workflows to map claims to validated evidence.

Sprint 3: Verification Intelligence Layer (Weeks 5–6)

  • Developing the RAG-based verification engine and claim classification mechanisms.

Sprint 4: API Development & Final Delivery (Weeks 7–8)

  • Delivering the real-time misinformation detection API, validating system performance, and finalizing documentation.
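The API deliverable in Sprint 4 could expose a request/response contract along these lines. This is a framework-free sketch of the contract only: the endpoint shape, field names, and the `trace_id` are assumptions, and the verification call itself is stubbed.

```python
import json

def verify_claim_endpoint(request_body: str) -> str:
    """Handle a hypothetical POST /verify request: accept a claim,
    return a verdict with traceable evidence references (stubbed)."""
    payload = json.loads(request_body)
    claim = payload.get("claim", "").strip()
    if not claim:
        return json.dumps({"error": "missing 'claim' field"})
    # A real implementation would invoke the RAG verification
    # engine here and attach the retrieved evidence links.
    response = {
        "claim": claim,
        "verdict": "unverified",
        "evidence": [],           # list of {source, reference, stance}
        "trace_id": "stub-0001",  # claim-to-source traceability handle
    }
    return json.dumps(response)

print(verify_claim_endpoint('{"claim": "garlic prevents flu"}'))
```

Returning the evidence list and a trace identifier in every response is one way to satisfy the transparency and traceability requirements stated above.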

More details will be shared with the designated team.

First Omdena Project?

Join the Omdena community to make a real-world impact and develop your career

Build a global network and get mentoring support

Earn money through paid gigs and access many more opportunities



Your Benefits

Address a significant real-world problem with your skills

Get hired at top companies by building your Omdena project portfolio (via certificates, references, etc.)

Access paid projects, speaking gigs, and writing opportunities



Requirements

Good English

A very good grasp of computer science and/or mathematics

Good understanding of AI/NLP, web scraping, and/or machine learning


