
Ace Your Next Interview: How AI-Powered Mock Interviews are Revolutionizing Job Preparation

May 3, 2024




Job interviews can be nerve-wracking and filled with uncertainty for many job seekers. Preparing for these interviews is no easy feat, as candidates must anticipate questions, practice responses, and exude confidence. In recent years, the rise of AI screening in hiring has added another challenge to the job hunt. Now, job seekers must not only master traditional interview skills but also understand the new technologies used in recruitment. By proactively preparing for interviews and staying up-to-date on the latest hiring trends, candidates can better navigate these obstacles and improve their chances of landing the job.

The Problem

People are rejected from job interviews for a variety of reasons, depending on their circumstances and the employer’s expectations. Most candidates fail at the interview stage due to:

Lack of confidence:

Candidates with strong knowledge or experience may struggle to convey it because of nervousness, anxiety, or insufficient preparation, and preparation is the foundation of confidence and charisma.

Lack of interview experience:

Failing to explain fundamentals, basic principles, or previous projects/internships is a red flag.

Poor communication skills:

Poor articulation or weak spoken English, as well as talking too little or too much, can signal future communication gaps.

Lack of the right attitude:

Humility, enthusiasm, and a desire to learn are essential at the beginning of a career. Aptitude without the right attitude is a no-go.

Lack of self-awareness:

It’s okay to have limited knowledge or exposure early in your career, but not knowing your strengths, values, interests, and why you fit a particular role is a red flag.

When it comes to practicing through mock interviews, candidates have very few AI-driven, cost-effective platforms that provide accurate feedback based on their performance during the interview.

The Background


The rising popularity of conversational agents like Siri, Alexa, and Google Assistant has sparked interest in developing chatbots to automate tasks, including job interviews. Advances in natural language processing (NLP) and machine learning have made creating sophisticated, intelligent chatbots easier than ever. AI-powered interview chatbots help companies reduce recruitment costs while maintaining high-quality hires.

During the hiring process, there may be multiple rounds such as written tests and online exams to narrow down the pool of job applicants. Once a candidate successfully passes through each stage, they must then undergo a final face-to-face evaluation to determine if they are indeed suitable for the position.

The Goal

The Omdena Hyderabad Local Chapter decided to build an LLM-based chatbot that can conduct mock HR-round interviews for aspiring candidates. The team focused on job roles where the HR round is the crucial point of the interview. The project brought together over 60 collaborators from countries around the world.

Goals and Objectives

This project aimed to create an interactive chatbot for mock interviews to help users improve their skills through practice and exposure to various questions. The chatbot aimed to build candidates’ confidence by allowing low-pressure rehearsal before actual interviews, reducing anxiety. It analyzes responses and provides immediate feedback on answers, grammar, and other aspects to refine techniques. Available anytime, anywhere, the chatbot can offer a cost-effective, scalable solution for interview training without needing physical coaches or scheduling constraints.

Our Approach

The main objectives of the project are as follows:

  • Build a platform to help job aspirants attend mock interviews and identify areas of improvement.
  • Explore the possibilities of LLM-based apps in human development.
  • Build skill sets of participants in NLP, LLM-based apps, prompt engineering, and retrieval augmented generation (RAG).

Market Research Analysis

The objective of this task was to identify the top ten job roles that have been in high demand from employers over the past few years.

The three most suitable job positions were finalized with input from domain experts such as HR professionals and experienced hiring managers, along with published articles. They are listed below:

  1. Customer Service Representative
  2. Sales and Marketing
    1. Sales Manager
    2. Marketing Manager
  3. Healthcare and Services
    1. Nurse
    2. Medical Assistant

Data Collection

Datasets

Data collection is a critical phase where relevant information is methodically gathered from various sources to achieve specific goals.

Subject matter experts reviewed the interview process to determine the key data elements to capture. Common threads across interview processes for different positions were identified, enabling a generic design and a shared set of data elements to collect.

Thousands of questions and answers were sourced directly from websites using Selenium-based web scraping tools. Selenium is a robust tool for extracting extensive data from websites, automating the process and saving time and effort. The team supplemented the scraped data by collecting multiple answers for individual questions.
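As a rough illustration of this step, the sketch below cleans and pairs scraped question/answer strings; the Selenium calls are shown only in comments because the URL and CSS selectors are hypothetical, not from the project.

```python
import re

def clean_scraped_text(raw):
    """Collapse whitespace and strip padding from a scraped string."""
    return re.sub(r"\s+", " ", raw).strip()

def pair_questions_answers(questions, answers):
    """Pair each scraped question with its answer, skipping empty entries."""
    return [(clean_scraped_text(q), clean_scraped_text(a))
            for q, a in zip(questions, answers)
            if q.strip() and a.strip()]

# With Selenium, the raw strings would first be pulled from the page, e.g.:
#   from selenium import webdriver
#   from selenium.webdriver.common.by import By
#   driver = webdriver.Chrome()
#   driver.get("https://example.com/interview-questions")  # hypothetical URL
#   questions = [e.text for e in driver.find_elements(By.CSS_SELECTOR, ".question")]
#   answers   = [e.text for e in driver.find_elements(By.CSS_SELECTOR, ".answer")]
```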

Data Preprocessing

Data preprocessing is the process of transforming raw data into an understandable format, and a key part of it is checking data quality. Quality can be assessed along the following dimensions:

Accuracy: whether the data entered is correct.

Completeness: whether all required data is present.

Consistency: whether copies of the same data stored in different places match.

Believability: whether the data is trustworthy.

Interpretability: how easily the data can be understood.

Collected questions and answers were checked manually for mismatches, unwanted questions and answers unrelated to the topic were removed, and poorly worded questions were replaced with meaningful ones.
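The quality checks above can be sketched as simple filters over Q&A records; the field names and thresholds here are assumptions for illustration, not the project's actual rules.

```python
def quality_check(records):
    """Filter Q&A records on completeness (both fields present), a crude
    accuracy proxy (non-trivial answer length), and consistency (no
    duplicate questions). Returns the records that pass every check."""
    seen = set()
    passed = []
    for rec in records:
        q = rec.get("question", "").strip()
        a = rec.get("answer", "").strip()
        if not q or not a:          # completeness: both fields present
            continue
        if len(a.split()) < 3:      # accuracy proxy: reject trivial answers
            continue
        key = q.lower()
        if key in seen:             # consistency: drop duplicate questions
            continue
        seen.add(key)
        passed.append(rec)
    return passed
```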

Question Grouping

In this project, collaborators used the Natural Language Toolkit (NLTK) for text preprocessing. NLTK is a leading Python library for working with human language data, providing interfaces to corpora, lexical resources, and text preprocessing tools. It’s free, open-source, and community-driven, but can be slow and difficult for production usage, with a steep learning curve. NLTK offers features like entity extraction, part-of-speech tagging, tokenization, and text classification.

The preprocessing involved removing unwanted columns, cleaning text by removing punctuation and converting to lowercase for readability and consistency. Cosine similarity was used to group similar questions based on context and meaning, using a threshold of 0.5 for efficient categorization and organization.
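The grouping step can be sketched in plain Python. This toy version uses bag-of-words vectors for the cosine similarity (the project's exact text representation may differ) with the 0.5 threshold mentioned above, greedily assigning each question to the first group it matches.

```python
import math
import string
from collections import Counter

def preprocess(text):
    """Lowercase and strip punctuation, as in the project's cleaning step."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return text.split()

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def group_questions(questions, threshold=0.5):
    """Assign each question to the first group whose representative it
    matches above the threshold; otherwise start a new group."""
    groups = []  # list of (representative Counter, member questions)
    for q in questions:
        vec = Counter(preprocess(q))
        for rep, members in groups:
            if cosine(rep, vec) >= threshold:
                members.append(q)
                break
        else:
            groups.append((vec, [q]))
    return [members for _, members in groups]
```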

Categorizing for Interview Phase

BERT, which stands for “Bidirectional Encoder Representations from Transformers,” is a large language model developed by Google that captures intricate nuances in language for precise categorization. Its deployment in this project shows how advanced natural language processing can enhance efficiency and effectiveness.

Zero-shot classification, another natural language processing task, involves a model classifying new examples into classes it was never explicitly trained on. The Transformers library’s zero-shot-classification pipeline was used for inference.
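A minimal sketch of this step is shown below. The phase label set is an assumption drawn from the interview flows described later in this article, not the project's exact labels, and the real pipeline call (which downloads a model) is left in comments.

```python
# Candidate phase labels: an assumption for illustration.
PHASE_LABELS = ["Introduction", "General", "Behavioral", "Situational",
                "Technical", "Communication", "Role Specific", "Conclusion"]

def label_question(question, classifier, labels=PHASE_LABELS):
    """Return the top-scoring phase label; the zero-shot pipeline
    returns its 'labels' list sorted by descending score."""
    result = classifier(question, candidate_labels=labels)
    return result["labels"][0]

# With the real pipeline (downloads a model on first use):
#   from transformers import pipeline
#   classifier = pipeline("zero-shot-classification")
#   label_question("Tell me about a time you calmed an upset customer.", classifier)
```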

After cleaning, grouping, and classifying questions, we had 460 questions for Customer Service, 331 for Sales and Marketing, and 1262 for Healthcare and Services, totaling 2053 questions across the combined dataset.

Data Preprocessing: Evaluation Specific

Following the preprocessing of questions and answers, each answer was individually assessed and categorized as good, average, or poor. The goal was to ensure that every question had multiple answers spanning the range of quality levels; where multiple answers were not available, additional ones were collected. These ratings were assigned to test the prompt that would later be used to evaluate candidate answers.

Interview Flow

Although teams varied in their approach to the interview flow, all teams shared two common elements: introductory questions at the beginning and summarizing/concluding at the end.

  1. Customer Service Representative
    1. Introduction
    2. General
    3. Behavioral
    4. Situational
    5. Conclusion
  2. Sales and Marketing
    1. Introduction
    2. Behavioral
    3. Technical
    4. Role Specific
    5. Conclusion
  3. Healthcare and Services
    1. Introduction
    2. Behavioral
    3. Communication
    4. Technical
    5. Conclusion

How questions are selected for each interview phase is elaborated in the Question Generation section below.

Mock Interview Flow

Data Storage

After standardizing the job role columns and interview flow, the collected data was combined and stored in a vector database, which efficiently stores, manages, and indexes large amounts of high-dimensional vector data. Vector databases have been gaining popularity for the value they add to generative AI applications. We used ChromaDB, an open-source vector database, with HuggingFace embeddings for embedding the text.
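To make the storage pattern concrete, here is a stdlib stand-in for what a ChromaDB collection provides: store embeddings alongside metadata, then query by similarity with a metadata filter. The toy bag-of-words embedding is an assumption; the project used HuggingFace embeddings and ChromaDB's own add/query interface.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding standing in for HuggingFace embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MiniVectorStore:
    """Minimal sketch of a vector collection: documents with embeddings
    and metadata, queried by similarity with an optional filter."""

    def __init__(self):
        self.items = []  # (embedding, document, metadata)

    def add(self, document, metadata):
        self.items.append((embed(document), document, metadata))

    def query(self, text, n_results=3, where=None):
        qv = embed(text)
        hits = [(cosine(vec, qv), doc) for vec, doc, meta in self.items
                if not where or all(meta.get(k) == v for k, v in where.items())]
        hits.sort(reverse=True)
        return [doc for _, doc in hits[:n_results]]
```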

Question Generation

When the app loads, users input their name, job position, and a summary of their education and experience. The chatbot then generates relevant questions based on this information. The app loads the ChromaDB instance into memory, which includes a ‘collection’ of vector embeddings for the questions along with associated metadata (position and interview phase). The app uses semantic search and Retrieval Augmented Generation (RAG) to generate tailored questions for the candidate.

For a given job position, the app considers the interview phase, sequence, and number of questions to be generated. Initially, it uses Semantic Search to generate questions. If Semantic Search can’t generate all the required questions for an interview phase, RAG is used to fill in the gaps. The “Preset Question 1” and “Preset Question 2” columns serve as fallbacks when both Semantic Search and RAG are unable to generate suitable questions for a particular interview phase.
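The fallback cascade described above can be sketched as follows; the three callables and the preset list are assumptions about how the implementation is wired, not the project's actual function names.

```python
def generate_phase_questions(phase, n_needed, semantic_search, rag_generate, presets):
    """Fill a phase's question list: semantic search first, then RAG for
    any shortfall, then preset fallback questions, mirroring the cascade
    described in the article."""
    questions = semantic_search(phase, n_needed)
    if len(questions) < n_needed:
        questions += rag_generate(phase, n_needed - len(questions))
    while len(questions) < n_needed and presets:
        questions.append(presets.pop(0))
    return questions[:n_needed]
```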

Speech to Text

Our HR-focused chatbot interview requires verbal answers, since strong communication is vital; typed input is therefore discouraged. Key points:

  • Prioritizing verbal interactions in HR chatbot interviews.
  • Encouraging spoken answers to promote better communication.
  • Deprioritizing typed replies within the scope of this project.

In the project, the Streamlit mic recorder is used to record candidate answers, and a speech-to-text function converts each recording to text. Transcripts are stored in a dictionary with the questions as keys; each transcript is displayed to the candidate after the question is answered, and all of them are used for evaluation once the interview ends.

Evaluation Framework

In the evaluation framework, the team developed a component to evaluate the answers given by the user. Several tools were available for building it:

  • Hugging Face
    1. Mistral 7B model
    2. Tokenizer
    3. bitsandbytes 4-bit quantization
    4. Pipeline
  • LangChain
    1. Agents
    2. Chains (LLMChain, SimpleSequentialChain, SequentialChain)
    3. Chat prompt template

The team ultimately chose to use chains from the LangChain library. LangChain is a framework for building applications around large language models, handling how raw inputs become prompts and how model outputs become usable responses. The most basic and widely recognized chain is the LLMChain, which consists of a PromptTemplate, a language model (either an LLM or a chat model), and an optional output parser.

The chain takes user input, passes it to the PromptTemplate to format it into a prompt, and then passes the formatted prompt to the LLM. The evaluation component assesses the candidate’s answer, rating it as poor, average, or good, and provides qualitative feedback.
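The chain pattern (format the prompt, call the model, parse the output) can be imitated in plain Python; this sketch only mirrors LangChain's LLMChain, and the template wording is an assumption, not the project's actual prompt.

```python
# Hypothetical evaluation prompt template, for illustration only.
EVAL_TEMPLATE = (
    "Question: {question}\n"
    "Candidate answer: {answer}\n"
    "Rate the answer as poor, average, or good, then give feedback."
)

def run_chain(inputs, llm, template=EVAL_TEMPLATE, parser=str.strip):
    """Format user input into a prompt, call the model, parse its output,
    mimicking the PromptTemplate -> LLM -> parser flow of an LLMChain."""
    prompt = template.format(**inputs)
    return parser(llm(prompt))
```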

The process begins by ingesting collected data into a vector database. After the interview, the evaluation iterates through the generated questions and the candidate’s answers.

The evaluation has three main stages:

Retrieval:

Embedding question-answer pairs, searching the vector database for similar questions, answers, and positions, and fetching three answers (one for each rating).

Augmentation:

Using few-shot learning by passing fetched data alongside question-answer pairs to make the model more contextually aware.

Leveraging LLM:

Generating the rating and qualitative feedback using the previously passed examples.
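The three stages can be sketched together as one function. The `store` and `llm` objects are stand-ins for the vector database and the Mistral model, and `fetch_examples` is a hypothetical helper name, not the project's API.

```python
def evaluate_answer(question, answer, store, llm):
    """Three-stage evaluation sketch: retrieve rated example answers,
    augment the prompt with them (few-shot), and ask the LLM for a
    rating plus qualitative feedback."""
    # Retrieval: similar question with one answer per rating level.
    # 'fetch_examples' is a hypothetical helper returning e.g.
    # {"good": ..., "average": ..., "poor": ...}.
    examples = store.fetch_examples(question)
    # Augmentation: build a few-shot prompt from the fetched examples.
    shots = "\n".join(f"{rating} answer: {text}"
                      for rating, text in examples.items())
    prompt = (f"Examples:\n{shots}\n\n"
              f"Question: {question}\nCandidate answer: {answer}\n"
              "Rate the answer (poor/average/good) and give feedback.")
    # Leveraging the LLM: generate the rating and feedback.
    return llm(prompt)
```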


Chatbot UI Design

The user interface (UI) is where the candidate interacts with the application. In our project, we utilize the Streamlit web user interface. The UI consists of a sidebar menu and a main page with three tabs: Q&A, History, and Results.

Upon loading the application, the user must enter their username and job position in the Candidate Profile sidebar menu and click “Start Mock Interview.” This action loads the main page (right panel) with three tabs containing information about the interview.

In the Q&A tab, the application presents a summary question followed by six mock interview questions. After answering each question, the candidate can access the History tab to review their responses. Once all questions have been answered, the Results tab is triggered, providing a summary of the interview.

Application Deployment

We successfully deployed our newly developed application on a cloud server. Our dedicated team members rigorously tested the functionality and usability of this application in various locations around the globe. After a thorough examination, we can confirm that it is performing optimally with no reported issues.

To ensure a seamless user experience, we conducted extensive testing covering different scenarios, devices, and network environments. This robust testing phase allowed us to identify and rectify potential bugs, resulting in an efficient and reliable product ready for deployment.

In summary, we are confident that our application will meet expectations and deliver outstanding performance. Should any difficulties be encountered or assistance required, our support team is ready to help.

Demo

Demo: HR Interview Assistant Tool

To showcase the capabilities of our AI-powered mock interview application, we have created a live demo available at https://omdena-hr-interview-assistant.vercel.app/. This interactive demonstration highlights the key features and functionality of our solution, providing a firsthand experience of how it can enhance interview preparation.

When accessing the demo, users input their name, desired job position, and a brief summary of their education and experience. The intelligent chatbot then generates a series of relevant, industry-specific questions based on the provided profile. Users engage with the chatbot by providing verbal responses, simulating a realistic interview scenario.

Throughout the mock interview, the application’s advanced natural language processing capabilities analyze the user’s responses, offering real-time feedback and insights. Upon completion, users receive a comprehensive evaluation of their performance, identifying strengths and areas for improvement.

The demo serves as a testament to the potential of our AI-powered mock interview application in building confidence, refining interview skills, and ultimately supporting job seekers in their job search.

Key Outcomes

  • Sophisticated Chatbot Development:
    Developed a sophisticated chatbot that provides an engaging and effective mock interview experience for job seekers
  • Advanced NLP Capabilities:
    Incorporated advanced natural language processing capabilities and customizable question sets for a flexible interviewing solution
  • Realistic Interview Simulation:
    Simulated common interview questions and scenarios to help candidates prepare for real-world interviews
  • Streamlined Hiring Process:
    Designed the chatbot to assist recruiters by streamlining the hiring process and freeing up their time for other important tasks

Limitations

  • Limited Job Positions:
    The team was able to work on a limited number of job positions.
  • Static Question Generation:
    All questions are generated at the beginning using role and summary as input, and questions do not change according to the answers.
  • Simplified Evaluation:
    Evaluation is done by ratings assigned to the answers in the collected data.
  • Manual Profile Creation:
Candidates must manually enter a summary of their background for question generation.

Despite limitations like a limited number of job positions, static question generation, and simplified evaluation, the team achieved impressive results. They developed a functional proof of concept showcasing the immense potential of AI-powered mock interviews. The project lays a solid foundation for future enhancements, demonstrating the team’s dedication and the solution’s promise.

Future Directions

  • Expand Application Scope: 
    Make the application available for a wider range of job roles to cater to diverse candidate needs
  • Enhance Interview Orchestration: 
    Develop an interview orchestrator that generates new questions based on responses to previous questions. Maintain a consistent and dynamic interview flow that adapts to the candidate’s performance.
  • Improve Answer Evaluation: 
    Evaluate answers using structured methods like the STAR (Situation, Task, Action, Result) technique. Provide more accurate and insightful ratings for candidate responses.
  • Streamline Candidate Profile Creation: 
    Allow candidates to upload their resume/CV instead of manually entering a summary. Automatically extract the candidate’s profile summary from the uploaded document.

Potential Industries for AI-Powered Mock Interview Applications

AI-powered mock interview applications have the potential to revolutionize the hiring process across a wide range of industries. Some of the key sectors that could benefit from this technology include:

  • Technology:
    AI-powered mock interviews help candidates prepare for technical interviews in the rapidly growing tech industry.
  • Finance:
    Mock interview apps provide realistic simulations of the rigorous interview processes in finance, helping candidates build confidence and refine responses.
  • Healthcare:
    As the healthcare industry evolves, AI-powered mock interviews help candidates prepare for industry-specific questions across various roles.
  • Retail:
    Mock interview applications help retail candidates practice common questions and develop strong communication and problem-solving skills.
  • Education:
    AI-powered mock interviews assist aspiring teachers in preparing for competitive interviews, showcasing their passion and ability to engage students.

By leveraging the power of AI-powered mock interview applications, companies across these and other industries can streamline their hiring processes, identify top talent, and ultimately build stronger, more successful teams.
