In this article, we'll cover the following topics:
- Using the Elastic Web Crawler to crawl job listings and index them to Elastic Cloud. Shoutout to my colleague Jeff Vestal for showing me how!
- Processing job listings with GPT-4o using the Elastic Azure OpenAI Inference Endpoint as part of an ingest pipeline.
- Embedding resumes and processing outputs with the semantic_text workflow.
- Performing a double-layered hybrid search to find the most suitable jobs based on your resume.
Theoretical use case: Using LLMs and semantic_text to match resumes to jobs
Here's an idea for a use case. Say I'm in the HR department at a company like Elastic, and I've got a few job openings and a talent pool of resumes. I might want to make my job easier by automatically matching resumes in my talent pool to my available openings. I implemented this using the Elastic Platform, and put my old resume into it.
These are the job openings apparently most relevant to my resume:
You know what, they're unexpectedly good picks. The first pick sounds very similar to what I've been doing for the past couple of months (it's actually a little eerie), and the second and third choices probably derive from my resume being stuffed with search and ML use cases.
Let's dive into how this was done!

The double hybrid search implemented in this article. An LLM turns job descriptions and resumes into two types of data: a set of requirements and a set of core competencies respectively, which are matched against each other. Simultaneously, resumes are matched against LLM-generated ideal candidate resumes.
Prerequisites
You will need an Elastic Cloud deployment and an Azure OpenAI deployment to follow along with this notebook. For more details, refer to this readme. If you are following along on your personal computer, ensure that Docker Desktop is installed before proceeding!
Here's what you might see upon a successful install, if you are on a Linux system:
Scraping Elastic's job listings using Elastic crawler
The first thing to do is install the Elastic crawler. Create a new project folder, cd into it, and run the following commands to clone the repo, build the docker image, and run it.
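The exact commands aren't reproduced here; based on the crawler repository's Docker workflow they look roughly like this (the image and container names are illustrative):

```bash
# Clone the Elastic open crawler and build its Docker image
git clone https://github.com/elastic/crawler.git
cd crawler
docker build -t crawler-image .

# Run the container in the background so we can exec commands into it later
docker run -i -d --name crawler crawler-image
```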
Once that's done, go to `crawler/config/` and create a new file called `elastic-job.yml`. Paste in the following snippet, and fill in your Elastic Cloud endpoint and API key. Change the `output_index` setting if you like. That's the index where the crawled web content will be stored. I've set it to `elastic-job`.
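The original snippet isn't shown here, but a minimal configuration might look like the following; the seed URL and the Elasticsearch connection fields are assumptions based on the crawler's example configs:

```yaml
# elastic-job.yml -- values below are placeholders
domains:
  - url: https://jobs.elastic.co
    seed_urls:
      - https://jobs.elastic.co/jobs

output_sink: elasticsearch
output_index: elastic-job

elasticsearch:
  host: https://<your-deployment>.es.<region>.cloud.es.io
  port: 443
  api_key: <your-elasticsearch-api-key>
```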
Now copy `elastic-job.yml` into your Docker container:
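Something like the following, adjusting the in-container path to wherever your crawler image keeps its config directory:

```bash
docker cp config/elastic-job.yml crawler:/app/config/elastic-job.yml
```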
Validate the domain (the target of our web scrape):
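The crawler CLI has a validate command for this; run it inside the container, pointing it at the config we just copied over:

```bash
docker exec -it crawler bin/crawler validate config/elastic-job.yml
```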
You should get back this message:
With that, we are good to go. Start the crawl!
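Again via `docker exec`, this time with the crawler's crawl command:

```bash
docker exec -it crawler bin/crawler crawl config/elastic-job.yml
```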
If all goes well, you should see 104 job descriptions in your `elastic-job` index on Kibana. Nice!

Elastic job openings as of 09/13/24, scraped content from https://jobs.elastic.co
Processing the job openings
Now that we have the job openings indexed, it's time to process them into a more useful form. Open up your Kibana Console, and create an inference endpoint for your Azure OpenAI LLM.
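The definition isn't reproduced here; with the `azureopenai` inference service it looks roughly like this (the endpoint name `azure_openai_completion` is a placeholder, and you'll need your own resource name, deployment ID, API key, and API version):

```
PUT _inference/completion/azure_openai_completion
{
  "service": "azureopenai",
  "service_settings": {
    "api_key": "<AZURE_OPENAI_API_KEY>",
    "resource_name": "<AZURE_RESOURCE_NAME>",
    "deployment_id": "<GPT_4O_DEPLOYMENT_ID>",
    "api_version": "2024-06-01"
  }
}
```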
We can make use of this inference endpoint to create an ingestion pipeline containing LLM processing steps. Let's define that pipeline now:
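What follows is a sketch rather than the exact pipeline: the pipeline name, prompt wording, and prompt field names are illustrative, and only the processors for the `requirements` field are shown. The same `script` / `inference` / `remove` trio (explained below) is repeated for `ideal_resume` and `descriptor`, each writing to its own output field.

```
PUT _ingest/pipeline/elastic-job-llm-pipeline
{
  "processors": [
    {
      "script": {
        "source": "ctx.requirements_prompt = 'Extract the core competencies and requirements for the following job posting, as a concise list: ' + ctx.body"
      }
    },
    {
      "inference": {
        "model_id": "azure_openai_completion",
        "input_output": {
          "input_field": "requirements_prompt",
          "output_field": "requirements"
        }
      }
    },
    {
      "remove": {
        "field": "requirements_prompt"
      }
    }
  ]
}
```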
We're using the LLM to create three new fields for our data.
- Requirements: This is a textual description of the core competencies and requirements for the job role in question. We're going to chunk and embed this. Later, the resume we pass as input will be processed into a set of core competencies. These core competencies will be matched with this field.
- Ideal Resume: This is the resume of a hypothetical "ideal candidate" for the position. We're also going to chunk and embed this. The resume we pass in will be matched with this Ideal Resume.
- Descriptor: This is a one-sentence description of the job role and what it entails. This will allow us to quickly interpret the search results later on.
Each LLM processing step has three parts:
- A `script` processor, which will build the prompt using the job description, which is stored in the `body` field. The prompt will be stored in its own field.
- An `inference` processor, which will run the LLM over the prompt, and store the output in another field.
- A `remove` processor, which will delete the prompt field once LLM inference has concluded.
Once we've defined our pipeline, we'll need an embedding model. Navigate to Analytics -> Machine Learning -> Trained Models and deploy `elser_model_2_linux-x86_64` by clicking the triangular Deploy button.

Deploy Elser by clicking the triangle button. This will use ML nodes so ensure your Elastic Cloud deployment allows autoscaling of ML nodes.
Once the model is deployed, run the following command to create an inference endpoint called `elser_v2`:
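Something along these lines; the allocation and thread counts are illustrative and should match what your deployment can support:

```
PUT _inference/sparse_embedding/elser_v2
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}
```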
With our embedding model deployed, let's define a new index called `elastic-job-requirements-semantic`. We're going to chunk and embed the `requirements` and `ideal_resume` fields, so set them to `semantic_text` and set `inference_id` to `elser_v2`.
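A mapping along these lines works; fields coming from the crawl (like `title` and `body`) and the LLM-generated `descriptor` can be left to dynamic mapping or declared explicitly:

```
PUT elastic-job-requirements-semantic
{
  "mappings": {
    "properties": {
      "requirements": {
        "type": "semantic_text",
        "inference_id": "elser_v2"
      },
      "ideal_resume": {
        "type": "semantic_text",
        "inference_id": "elser_v2"
      }
    }
  }
}
```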
Once the setup is done, let's run a reindex operation to process our job descriptions and index the results in `elastic-job-requirements-semantic`. By setting size to 4, we ensure that processing will be done in batches of 4 documents at a time, which gives us some security in the event that the LLM API fails for whatever reason:
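A reindex along these lines, assuming the pipeline name from the sketch above; `wait_for_completion=false` makes the call return the task ID mentioned below instead of blocking:

```
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "elastic-job",
    "size": 4
  },
  "dest": {
    "index": "elastic-job-requirements-semantic",
    "pipeline": "elastic-job-llm-pipeline"
  }
}
```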
Execute the reindex, and watch as the processed docs fill up the `elastic-job-requirements-semantic` index!

The new requirements field and its chunked embeddings in the elastic-job-requirements-semantic index.
The console will give you a `task_id`, which you can use to check the status of the reindexing with this command:
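Substitute the `task_id` returned by the reindex call:

```
GET _tasks/<task_id>
```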
Once the job is done, we can proceed to the final step!
Setting up resume search
For this step, we'll move to a Python environment. In your project directory, create a `.env` file and fill it in with these values:
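The variable names below are placeholders used by the sketches in this section; adapt them to whatever your own scripts read:

```
ELASTIC_ENDPOINT="<your-elastic-cloud-endpoint>"
ELASTIC_API_KEY="<your-elastic-api-key>"
AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
AZURE_OPENAI_API_KEY="<your-azure-openai-api-key>"
AZURE_OPENAI_DEPLOYMENT_NAME="<your-gpt-4o-deployment-name>"
```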
Now add your resume to the directory. A `.pdf` file works best. I'm going to refrain from posting my resume here because I am shy.
Run the following command to install dependencies (Elasticsearch and OpenAI):
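The article calls out the Elasticsearch and OpenAI clients; the loader class below additionally relies on `llama-index`, and `python-dotenv` is handy for reading the `.env` file:

```bash
pip install elasticsearch openai llama-index python-dotenv
```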
And create a Python script with two classes: `LlamaIndexProcessor` calls the `SimpleDirectoryReader` to load local documents, and the `AzureOpenAIClient` provides a convenient way to call `gpt-4o`.
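A minimal sketch of those two classes, assuming the `.env` variable names from earlier; everything beyond the class names described above is an assumption:

```python
import os

from dotenv import load_dotenv
from llama_index.core import SimpleDirectoryReader
from openai import AzureOpenAI

load_dotenv()


class LlamaIndexProcessor:
    """Loads local documents (e.g. a resume PDF) with SimpleDirectoryReader."""

    def load_documents(self, directory: str):
        return SimpleDirectoryReader(input_dir=directory).load_data()


class AzureOpenAIClient:
    """Convenience wrapper for chat completions against a gpt-4o deployment."""

    def __init__(self):
        self.client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-06-01",  # adjust to your deployment's API version
        )
        self.deployment = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]

    def generate(self, system_prompt: str, user_prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.deployment,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content
```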
Now it's time to search for jobs! Run this code to load your resume:
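Assuming the `LlamaIndexProcessor` sketch above, loading and concatenating the resume might look like this:

```python
# Load everything in the current directory (your resume PDF) and join the text
processor = LlamaIndexProcessor()
documents = processor.load_documents(".")
resume = "\n".join(doc.text for doc in documents)
```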
Let's generate the core competencies of your resume with the following prompt:
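The exact prompt isn't reproduced here; this is an illustrative stand-in with the same intent, using the `AzureOpenAIClient` sketch above:

```python
llm = AzureOpenAIClient()
competencies = llm.generate(
    system_prompt="You are an expert technical recruiter.",
    user_prompt=(
        "Summarize the following resume as a concise list of core "
        "competencies, key skills, and areas of experience:\n\n" + resume
    ),
)
print(competencies)
```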
For my resume, this was the block of competencies generated:
Now, initialize the Python Elasticsearch client:
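Using the endpoint and API key from the `.env` file:

```python
import os

from dotenv import load_dotenv
from elasticsearch import Elasticsearch

load_dotenv()

es = Elasticsearch(
    hosts=os.environ["ELASTIC_ENDPOINT"],
    api_key=os.environ["ELASTIC_API_KEY"],
)
```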
And let's define a query!
Searching for a job using hybrid search
It's time to make use of a double hybrid search. I call it double because we're going to run two hybrid searches, each over a separate field:
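Here is one possible shape for that query, sketched with the Python client. It assumes an Elasticsearch version that allows nesting `rrf` retrievers, and uses a `match` on the raw `body` field as the lexical leg of each hybrid search alongside a `semantic` query on the embedded field; the exact query used in the original run may differ in its details.

```python
response = es.search(
    index="elastic-job-requirements-semantic",
    retriever={
        "rrf": {
            "retrievers": [
                {
                    # Hybrid search #1: resume competencies vs. job requirements
                    "rrf": {
                        "retrievers": [
                            {"standard": {"query": {"semantic": {
                                "field": "requirements", "query": competencies}}}},
                            {"standard": {"query": {"match": {"body": competencies}}}},
                        ]
                    }
                },
                {
                    # Hybrid search #2: the raw resume vs. the ideal candidate resume
                    "rrf": {
                        "retrievers": [
                            {"standard": {"query": {"semantic": {
                                "field": "ideal_resume", "query": resume}}}},
                            {"standard": {"query": {"match": {"body": resume}}}},
                        ]
                    }
                },
            ]
        }
    },
    size=5,
)

for hit in response["hits"]["hits"]:
    print(hit["_source"].get("title"), "|", hit["_source"].get("descriptor"))
```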
There are two `rrf` retriever components. The first will embed the competencies and do a hybrid search over the `requirements` field. The second will embed the resume itself, and do a hybrid search over the `ideal_resume` field. Run the search and let's see what we get!
The results were shown at the beginning of the post, so reproducing them here would be a bit redundant.
And with that, we're done!