AI-Powered Job Slotting Tool at Syngenta Group

In this conversation, Dominic O’Kelly, TR2050 Founder, and Anukriti Kumar from Syngenta Group discuss Syngenta’s AI pilot focused on job slotting, a process that has long challenged HR teams with its complexity, inconsistency, and administrative load.


The pilot set out to test whether a secure, internally hosted large language model could improve efficiency and consistency in job slotting decisions across regions and business units, without removing the need for human judgement. Rather than replacing existing frameworks, the goal was to augment them, guiding decisions, reducing manual error and freeing HR to focus on more strategic work.

This conversation explores:

  • Why job slotting was prioritised as a test case for AI in HR
  • How Syngenta approached internal model development and risk controls
  • What surfaced when early assumptions were tested in practice
  • How the pilot connects to broader goals such as fairness, transparency and compliance
  • What is needed to scale this work across users and regions

Watch the conversation here.


Hypothesis:

Syngenta set out to explore whether an AI-powered job slotting tool could reduce inefficiencies, improve consistency, and free HR colleagues from repetitive tasks, without removing the need for human judgement. The underlying hypothesis was that a centrally designed, secure large language model (LLM) could support HR consultants in making more consistent and transparent job slotting decisions across regions and business units.

Rather than replacing existing frameworks, such as the Mercer IP methodology, the aim was to augment them, guiding decision-making, minimising manual errors, and enabling HR to focus on more strategic work. The pilot sought to determine whether AI could achieve this level of support reliably and at scale.


Methodology:

Syngenta took a centralised approach to AI adoption, using an internally hosted GPT-based model for the pilot. The project began with an HR hackathon 18–20 months prior, generating ideas that laid the groundwork for identifying suitable AI use cases. Job slotting was prioritised due to high volumes, user frustration, and inconsistency in documentation and outcomes.

For the pilot, all source material remained internal. Input data included job family descriptors, role samples, and organisational hierarchy structures. The backend architecture involved data indexing, chunking, and metadata tagging to contextualise the LLM’s responses. Strict business rules were implemented to protect sensitive documents and maintain auditability, such as preventing source documents from being shared directly and supporting document uploads with built-in feedback loops.
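The backend steps described above, splitting internal documents into chunks and tagging each chunk with metadata so retrieval can scope the LLM's context, can be sketched in a few lines. This is a minimal illustration only, not Syngenta's implementation; all names here (`chunk_document`, `index_document`, the `job_family` and `region` metadata keys) are hypothetical.

```python
# Illustrative sketch of an index-chunk-tag pipeline like the one
# described above. Chunks carry metadata so a retriever can restrict
# the LLM's context to the relevant job family or region.

def chunk_document(text: str, chunk_size: int = 200) -> list[str]:
    """Split a document into word-bounded chunks of roughly chunk_size characters."""
    words, chunks, current = text.split(), [], ""
    for word in words:
        if current and len(current) + len(word) + 1 > chunk_size:
            chunks.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        chunks.append(current)
    return chunks

def index_document(text: str, metadata: dict) -> list[dict]:
    """Attach the same metadata to every chunk so the retriever can filter by it."""
    return [{"chunk": c, "metadata": metadata} for c in chunk_document(text)]

def retrieve(index: list[dict], **filters) -> list[str]:
    """Return chunks whose metadata matches all the given filters."""
    return [e["chunk"] for e in index
            if all(e["metadata"].get(k) == v for k, v in filters.items())]
```

In practice the retrieval step would use embeddings rather than exact metadata matching, but the structure, chunks contextualised by tags, is the same idea.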

The testing process focused on proximal metrics: accuracy, completeness, relevance, clarity, ease of use, and consistency. A small group of HR testers reviewed 50 roles using the tool.

Feedback mechanisms allowed users to rate output quality and flag attempts at job inflation. An audit trail supported transparency and governance.
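The feedback-and-audit mechanism described above can be sketched as a simple append-only log: each reviewed suggestion records the user's quality rating and an optional job-inflation flag, so governance reviews have a trail to work from. This is a hypothetical sketch, not the pilot's actual schema; the class and field names are illustrative.

```python
# Hypothetical sketch of an audit trail for reviewed slotting
# suggestions: each entry captures the output-quality rating and
# whether the reviewer flagged a possible job-inflation attempt.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SlottingAudit:
    entries: list = field(default_factory=list)

    def record(self, role: str, suggested_grade: str, rating: int,
               inflation_flag: bool = False) -> dict:
        """Append one reviewed decision to the audit trail."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "suggested_grade": suggested_grade,
            "rating": rating,              # e.g. a 1-5 output-quality score
            "inflation_flag": inflation_flag,
        }
        self.entries.append(entry)
        return entry

    def flagged(self) -> list:
        """Entries flagged as possible job inflation, for governance review."""
        return [e for e in self.entries if e["inflation_flag"]]
```

An append-only structure like this supports the transparency goal directly: decisions are never overwritten, only accumulated, so the rationale behind any slotting outcome can be reconstructed later.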

Syngenta also tested two assumptions:

  • That role profiles within teams or geographies were generally consistent.
  • That users actually wanted consistent job slotting decisions.

Both assumptions were challenged during the pilot.


Results:

The pilot showed strong initial success, with over 90% satisfaction among testers across dimensions such as accuracy, relevance, and ease of use. Reviewers reported time savings and appreciated having structured rationales documented, supporting better decision-making and reducing ambiguity.

However, the pilot also surfaced key learnings. Internal data alone was insufficient; without external benchmarks, the LLM risked reinforcing pre-existing inconsistencies. There was also a clear need to educate users about what AI tools can and cannot do. Misunderstanding capabilities created risk, particularly when tools influence outcomes tied to pay or legal compliance.

Syngenta defined success as a future state where AI tools can independently generate role descriptions, evaluate placement in hierarchy, and recommend market-aligned pricing, without manual effort. This vision aligns with broader organisational goals to reduce bias and improve fairness in line with evolving regulatory landscapes, such as the EU pay transparency directive.

Next steps include expanding the testing pool across geographies and business units, releasing the tool to the wider HR community, and eventually enabling line managers to use it directly. This progression will require further training, stronger audit trails, and clear governance around risk thresholds.
