Project ideas from Hacker News discussions.

Fighting Fire with Fire: Scalable Oral Exams

📝 Discussion Summary

1. Preference for Traditional Human Assessments

Many favor a return to in-person proctored written exams or human-led oral exams, citing their proven scalability and effectiveness before AI.
"Written exams at a set time and place graded by a human grader." - jimbokun
"Oral exams scale fine. A TA makes $25 per hour, and an oral exam is going to take an hour at most." - bccdee

2. AI Exams Dehumanizing and Stressful

Critics decry AI voice agents as dystopian, condescending, and anxiety-inducing, undermining respect and fairness.
"Being interrogated by an AI voice app... I am so grateful I went to university in the before time." - A_Duck
"if I was a student, I just fundamentally don't want to be tested by an AI. ... it just doesn't feel respectful." - Wowfunhappy

3. Arms Race with Cheating and Declining University Value

AI makes cheating on take-home work far easier, while high tuition raises expectations of real human instruction; degrees lose their value as a signal amid grade inflation and student entitlement.
"Universities are rapidly becoming useless as a signal of knowledge and competency of their graduates." - jimbokun
"Students cheat when grades are more valuable than knowledge." - dvh


🚀 Project Ideas

Scalable Oral Exam Scheduler

Summary

  • A web platform that automates scheduling, video proctoring, and TA assignment for in-person or remote oral exams, solving logistical nightmares for professors with 36+ students.
  • Core value: Enables human-led oral exams at scale (e.g., parallel sessions via video rooms), detecting AI cheating by verifying project understanding in real-time conversations.

Details

| Key | Value |
| --- | --- |
| Target Audience | University professors and TAs handling mid-to-large classes (30-200 students) |
| Core Feature | AI-assisted scheduling (optimizes TA availability and student slots); integrated Zoom-like video with recording/transcription; rubric-based grading dashboard for TAs |
| Tech Stack | React/Next.js frontend, Node.js backend, Twilio Video SDK for calls, Google Calendar API, PostgreSQL, OpenAI for transcription/rubric suggestions |
| Difficulty | Medium |
| Monetization | Revenue-ready: freemium ($0 for <50 students, $5/student/month for pro features) |
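The scheduling core could start as a greedy first-fit assignment of students to TA availability slots. A minimal sketch, assuming slots are pre-discretized into IDs like "Mon-09"; the `TA` class and `schedule` function are illustrative, not the platform's actual API (a real scheduler would add constraints such as student availability, room capacity, and load balancing across TAs):

```python
from dataclasses import dataclass, field

@dataclass
class TA:
    name: str
    slots: list[str]                         # available slot IDs, e.g. "Mon-09"
    assigned: dict = field(default_factory=dict)  # slot ID -> student

def schedule(students: list[str], tas: list[TA]) -> dict[str, tuple[str, str]]:
    """Greedily give each student the first open slot across TAs.

    Returns {student: (ta_name, slot_id)}; students left out when
    all slots are taken simply don't appear in the result.
    """
    assignments: dict[str, tuple[str, str]] = {}
    for student in students:
        for ta in tas:
            free = [s for s in ta.slots if s not in ta.assigned]
            if free:
                slot = free[0]
                ta.assigned[slot] = student
                assignments[student] = (ta.name, slot)
                break
    return assignments
```

Greedy first-fit keeps the logic transparent for a v1; if fairness or preference matching matters later, the same data model feeds naturally into a bipartite-matching or constraint-solver backend.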

Notes

  • Addresses "logistical nightmare" (baq, eaglefield) and preference for humans: "Get grilled by another human, not an AI" (xboxnolifes).
  • High utility for scaling oral exams praised in Europe (YakBizzarro, JanisErdmanis); sparks HN debates on reverting to pre-AI methods.

AI Exam Practice Simulator

Summary

  • Self-hosted tool for students to run unlimited practice oral/written exams with dynamic AI-generated questions based on a course syllabus or project, addressing the lack of low-stakes preparation and fears of exam gaming.
  • Core value: Builds real understanding via repeated fresh questions (no leaks), with performance analytics to track improvement.

Details

| Key | Value |
| --- | --- |
| Target Audience | Students and professors in AI-impacted courses (CS, business, engineering) |
| Core Feature | Upload syllabus/project docs for question generation; voice/text modes; score history dashboard; export transcripts for review |
| Tech Stack | Streamlit/Gradio UI, LlamaIndex for doc ingestion, Grok/Claude APIs for question generation/grading, ElevenLabs for voice |
| Difficulty | Low |
| Monetization | Revenue-ready: $9/month per user or $99/course license for professors |
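The "fresh questions every run" step could begin as plain prompt construction over the uploaded course text, before any voice or grading layers. A minimal sketch, assuming the common chat-message shape (`role`/`content` dicts) used by the Grok, Claude, and OpenAI APIs; the function name and prompt wording are assumptions, not any vendor's API:

```python
def build_exam_prompt(course_text: str, n_questions: int = 5) -> list[dict]:
    """Build a chat-style message list asking an LLM for fresh oral-exam questions.

    Returns the messages payload only; the caller passes it to whichever
    chat-completion API the deployment uses.
    """
    system = (
        "You are an oral examiner. Generate open-ended questions that test "
        "conceptual understanding of the material, not memorization. "
        "Never repeat questions across sessions."
    )
    user = (
        f"Course material:\n{course_text}\n\n"
        f"Write {n_questions} distinct oral-exam questions, one per line."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

Keeping prompt construction pure (no network calls) makes it easy to unit-test and to swap model providers, which matters for a self-hosted tool where students may bring their own API keys.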

Notes

  • Fulfills "unlimited practice runs" demand (alwa, ted_dunning, trjordan): "The more you practice, the better you get. That is... actually how learning is supposed to work."
  • Practical for voluntary trials (eaglefield); HN users would love as "excellent teaching tool" (jimbokun).
