AI is about to completely change construction law

by Dr Stacy Sinclair, Partner

Since the release of OpenAI’s ChatGPT a year ago, AI has taken the world by storm, revolutionising how we interact with technology and sparking debate about the balance between technological advancement and ethics, privacy, and the changing nature of human-computer interaction. Stacy Sinclair discusses the impact of AI on construction law and what the future may hold for our industry.

The Collins English Dictionary named “AI”, the abbreviation for artificial intelligence, as the most notable word of 2023.1

This comes as no surprise. Since the release of OpenAI’s generative AI platform ChatGPT to the public in November 2022, AI has consistently been the subject of debate, frequently making news headlines, and has even prompted governments to act quickly. From Italy’s temporary ban of ChatGPT in April 2023 to the world’s first AI Safety Summit in the UK in November 2023, which brought together 29 governments along with key international institutions and major AI companies, AI is certainly a force to be reckoned with.

The title of this article is shamelessly derived from the title of Bill Gates’s recent Gates Notes blog “AI is about to completely change how we use computers”.2 In the blog, Gates explains how, within five years, there will no longer be different apps for different tasks. Rather, using a type of software called an “agent”, you will simply tell your device what to do in natural language. Agents, which have only recently become possible with the advancements in AI, are much smarter than “bots” and are set to revolutionise how we live, work, and interact with computers.

The same holds true for construction law. Advancements in AI are about to completely change how we practise construction law. AI stands at the intersection of innovation and construction law, and is set to transform our processes and practices, reshaping and challenging our traditional ways of working.  

This article builds on one I wrote back in January 2019, “AI & Construction Law: an essential and an inevitable partnership”,3 which I consider still valid and very much relevant – though it could soon become quite dated, particularly if Bill Gates’s prediction about agents holds true.

In that article I concluded by stating that there is a significant amount of hype around AI and that if you are not utilising AI now, you certainly will be, to some degree, in the very near future – either by choice or by obligation. I was correct about our use of AI and wrong about the level of hype. AI certainly was on the Gartner Hype Cycle, but at that point, only in the “innovation trigger” stage. Little did I realise that four years later ChatGPT would explode into the market, skyrocketing generative AI to the top of the cycle, the “peak of inflated expectations”, meaning that it is projected to reach transformational benefit within two to five years.4

In other words, within two to five years, AI will completely change construction law.

In August 2023 I joined a panel at the 11th IBDiC International Congress in Brazil, hosted by the Brazilian Institute of Construction Law and the International Construction Law Association, to consider “Artificial Intelligence in Construction Law”. The following documents some of the topics I discussed, outlining how AI will transform construction law. 

AI is not new

Even though AI has been in existence for nearly 75 years, it was not until 2023 that its capabilities expanded to a level where it could become an integral part of daily life and industry. Only then did it become widely accessible and affordable to the general public, allowing its potential to be fully utilised.

AI, a term coined by John McCarthy at the Dartmouth Summer Research Project on AI,5 dates back to the 1950s and builds on technical ideas from long before that, including those of Alan Turing. The goal of the Dartmouth project was to build a machine that could do what the human brain could do, and the participants naively did not think it would take that long.6 The same was said of the resolution of WWI.7

By the 1960s, AI’s use in law was already being considered and contemplated. Reed C Lawlor, a member of the State Bar of California, speculated that computers would one day become able to analyse and predict judicial decisions, by feeding a set of facts into a machine that has cases, rules of law, and reasoning rules stored in it.8

Following this, the use of AI in construction law was first seen in 1988. Professor Philip Capper and Professor Richard Susskind OBE, now the President of the Society for Computers and Law in the UK and Technology Adviser to the Lord Chief Justice of England and Wales, developed a rule-based expert system for the Latent Damage Act 1986.9 As Professor Susskind explained, it was essentially a hand-crafted AI system: a decision tree with over two million paths to guide lawyers through the not-so-easy-to-digest legislation.

Since then there have, of course, been decades of research and development, particularly in computing power, and we have seen various breakthroughs over the years. Two examples include the moment the IBM supercomputer Deep Blue defeated grandmaster and then-world champion Garry Kasparov at chess in 1997, and when the computer system Watson won Jeopardy! in 2011 against human champions Brad Rutter and Ken Jennings.

With the release of ChatGPT in November 2022, it is only within the last year that we have seen generative AI explode into the market and the general public become more aware of AI (generative or otherwise) and gain free access to generative AI tools. By 1 November 2023, even King Charles III was addressing the development of advanced AI, which he said is “no less important than the discovery of electricity”.10

For those not aware, ChatGPT is just one of a number of generative AI tools now on the market. It is essentially a chatbot, built on a large language model, that can answer questions, tell stories, produce essays or presentations, generate and summarise text, write code, and more, in response to questions or prompts.

AI is no longer an out-of-reach, inaccessible technology. It has crossed a threshold of completing certain tasks at a higher quality level than humans can, and at a much faster rate.11 Now industries, companies and (indeed) individuals are grasping and grappling with what to do with it. The release of ChatGPT quickly brought AI into sharp focus, despite its long-standing history.

Generative AI and the law

In terms of the legal industry, the use of generative AI first made headlines in February 2023 when a Colombian judge declared that he had used ChatGPT in his decision.

Judge Padilla stated that he had used ChatGPT in his decision where he concluded that the entirety of a child’s medical expenses and transport costs should be paid by his medical plan as his parents could not afford them. Judge Padilla defended his use of the technology, suggesting it could make Colombia’s legal system more efficient. That said, he did not use ChatGPT alone. In the usual way, he had precedents from previous rulings to support his decision.12

Then in the spring of 2023, a New York lawyer representing an individual claiming against an airline in a personal injury case used ChatGPT for legal research. ChatGPT fabricated cases, perhaps because of the way the lawyer phrased his prompts. It appears the lawyer did not check the case citations ChatGPT generated and relied on the false cases blindly. The judge did check, however, and the lawyer was fined US$5,000.13

These are but two examples which demonstrate how generative AI is already being experimented with and utilised to a certain degree within the legal profession.

Whilst AI has tremendous, transformative potential and is rife with opportunities for construction law, equally there are critical considerations and challenges which users must be aware of and address. The technology is powerful but is still maturing. A few crucial points to note:

  • Privacy & confidentiality: If using the openly available platforms (e.g., ChatGPT), any data uploaded or inserted into the platform is not private. The data and information can be viewed and possibly even used by others. To be clear, platforms such as OpenAI’s ChatGPT are not secure, private, confidential environments. Private platforms are available, but their privacy and security should be vetted before use.
  • Hallucinations: Generative AI platforms are still at a point where they have the potential to “hallucinate” or fabricate information.  As such, these platforms should be seen as providing access to a super-charged and super-intelligent assistant, whose outputs must be checked and verified by a human. 
  • Ethical considerations: As AI systems learn from historical data that might perpetuate systemic issues, there are real concerns encompassing bias and fairness. Ensuring that AI remains a tool for justice rather than a source of exacerbation requires careful calibration and oversight.
  • Not industry-specific and/or up to date: Depending on which AI system you are using, it may or may not be specific to law and/or designed as a legal tech tool. The underlying data which it is learning and drawing from may not necessarily be legally focused and/or up to date. It is therefore imperative to understand the platform you are using. For example, ChatGPT 3.5 has a knowledge cut-off of January 2022 (previously September 2021), whilst ChatGPT 4.0, available to “Plus” users, can browse the internet and analyse data in real time as it processes.14

Given these considerations, and others, generative AI in legal practice must be used with caution.

As a result, the industry appears to be in a state of “consideration”. By this, I mean considering how to use generative AI, how to increase productivity and enhance services, and what processes and procedures can be automated or made more efficient with this technology. Companies globally are working with technology providers to develop and use generative AI in a secure, private and trusted environment, so that they can harness its power and start to reshape work processes fundamentally within their organisations.

As generative AI is now at the “peak of inflated expectations”, whilst it is not yet at a point at which it can do everything we expect of it, we are nevertheless on the cusp of something big: an AI-driven future. Perhaps in the short term, over the next few years, we will see incremental change and a lot of experimentation; but certainly in the decade to come, a divide will emerge between those who have embraced the inevitable change and those who have not and are struggling to keep up.

As we look to the AI-driven future of construction law, it is clear that contract analysis, tribunal selection processes, the predictability of dispute outcomes and the use of AI in decision making are set to see fundamental change.

Contract analysis and contract review

Advancements in AI in respect of contract analysis and contract review were already in play well before generative AI was released on the market last year.

Various legal tech applications exist with the sole purpose of assisting with the productivity of a contract review: enhancing accuracy, speed and consistency, and/or automating and streamlining various tasks that would otherwise require substantial time and effort from legal professionals. Many of these applications use machine learning, so that the system learns what is good and what is bad, or rather, what is an acceptable level of risk to the company.

Some possible use cases include:

  • Automated data extraction and data entry: AI can scan contracts and extract key information such as parties' names, dates, clauses and obligations. This can eliminate the need for manual data entry and ensure accuracy in capturing crucial details. This may be particularly helpful in due diligence exercises and/or managing risk across large quantities of similar contracts, e.g. supply contracts or subcontracts.
  • Clause identification and analysis: AI can identify specific clauses within contracts, such as indemnity clauses, confidentiality clauses, termination clauses, etc., speeding up the process of locating important sections for review.
  • Monitoring contractual risk: This topic is very much in the spotlight just now. Companies are keen to develop solutions which align to their internal policies and “playbooks”. AI has the potential to assess contractual risk by comparing clauses/contracts against predefined criteria, company policies and/or legal standards/regulations. If an AI system has been trained on a playbook, it can identify whether a clause is likely to be company-approved and/or whether it contravenes any liability limits or caps, along with which negotiation possibilities or variations on a clause may be acceptable. By highlighting clauses of concern and flagging which clauses need further human review, this helps prioritise review efforts and focus on high-risk issues.
  • Consistency checks: AI can detect inconsistencies or contradictions within a contract or between multiple contracts. This ensures that the terms and conditions are coherent and aligned throughout.
  • Cross-referencing: AI can cross-reference various sections of a contract to check for inconsistencies or conflicts. It can also cross-reference clauses with relevant legal precedents or regulations.
  • Workflow automation: AI and other technologies can help create and manage workflows for contract review. It can assign tasks, track progress and notify relevant parties when specific actions are required.

Some of the above examples are not necessarily available “out-of-the-box” just yet and may take time and money to develop and configure to specific requirements. 
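
To make the clause-identification idea concrete, here is a minimal, illustrative sketch in Python. It uses simple keyword matching rather than the trained models a commercial platform would employ, and the clause categories, patterns and sample contract text are all invented for the example:

```python
import re

# Invented keyword patterns for a handful of clause types; a real
# system would use a model trained on annotated contracts, not regexes.
CLAUSE_PATTERNS = {
    "indemnity": re.compile(r"\bindemnif(?:y|ies|ication)\b", re.I),
    "confidentiality": re.compile(r"\bconfidential(?:ity)?\b", re.I),
    "termination": re.compile(r"\bterminat(?:e|ion)\b", re.I),
    "limitation_of_liability": re.compile(r"\bliab(?:le|ility)\b", re.I),
}

def classify_clauses(clauses):
    """Tag each clause with the clause types its wording suggests."""
    results = []
    for number, text in enumerate(clauses, start=1):
        tags = [name for name, pattern in CLAUSE_PATTERNS.items()
                if pattern.search(text)]
        results.append({"clause": number, "text": text, "tags": tags})
    return results

# A toy "contract" of three clauses, for illustration only.
contract = [
    "The Contractor shall indemnify the Employer against third-party claims.",
    "Either party may terminate this agreement on 30 days' written notice.",
    "Payment shall be made within 28 days of a valid invoice.",
]

for item in classify_clauses(contract):
    print(item["clause"], item["tags"])
```

The workflow, taking clauses in and producing tags for a human reviewer to prioritise, mirrors what the platforms described above do, albeit with far more sophisticated analysis.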

AI can be a powerful tool and assistant in contract review and analysis, but it certainly does not entirely replace human expertise. Legal professionals still play a critical role in making final judgments, especially in complex or nuanced situations, which of course tends to be the case with construction contracts. 

Importantly, if you are in the market for a review platform, do not start with the tech – start with the outcomes. What do you need it to do? Define the outcomes first, or you may end up paying for something no one uses.

Selecting tribunals and predicting outcomes

With regard to using AI to select tribunals or better predict the outcome of a dispute from court judges or arbitration tribunals, the underlying issue here is data: having access to data and surfacing data-driven insight, so that a party can make data-led decisions, either for selecting tribunals (e.g. in international arbitration) or predicting outcomes in court cases.

As with contract review platforms, there are various platforms or databases commercially available, driven by AI and natural language processing, which (if you have access) may assist. These platforms tend to be available via paid subscriptions, much like most legal research databases.

One point to note is that, like any other system, rubbish in equals rubbish out. If a platform only has limited data or contains data which is only confined to a particular time period or jurisdiction, it can be unhelpful. It is important to note what the purpose of the database is and what data it is drawing from.

The functionalities in this area include:

  • Data analysis and data comparison: analysing vast amounts of data from previous arbitration cases, legal precedents and tribunal compositions to help identify patterns and preferences in the selection of arbitrators for specific types of cases. 
  • Predictive analytics: using historical data to make predictions about the likely behaviour, rulings, and outcomes of different arbitrators or judges, to help lawyers and clients in choosing and/or aligning objectives and expectations.
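
At its simplest, the data-led insight described above can be illustrated with a toy calculation: given a set of historical decisions, compute each judge’s base rate of finding for the claimant. The judges and outcomes below are invented for illustration; real platforms apply far richer analysis to far larger datasets:

```python
from collections import defaultdict

# Toy historical dataset of (judge, outcome) pairs, where the outcome
# records which party the judge found for. All names are invented.
history = [
    ("Judge A", "claimant"), ("Judge A", "claimant"),
    ("Judge A", "defendant"), ("Judge A", "claimant"),
    ("Judge B", "defendant"), ("Judge B", "defendant"),
    ("Judge B", "claimant"),
]

def claimant_success_rate(cases):
    """Base rate of claimant success per judge, from past decisions."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for judge, outcome in cases:
        totals[judge] += 1
        if outcome == "claimant":
            wins[judge] += 1
    return {judge: wins[judge] / totals[judge] for judge in totals}

rates = claimant_success_rate(history)
print(rates)  # Judge A: 0.75, Judge B: 1/3
```

A base rate is, of course, only a starting point: the commercial platforms layer on natural language processing of the issues, counsel, experts and case type before offering any insight.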

An American example is Lex Machina, originally developed at Stanford University and acquired by LexisNexis in 2015. Lex Machina focuses on using AI and machine learning to analyse and extract information from court cases, enabling lawyers and legal teams to make more informed decisions and strategic choices in litigation. Originally Lex Machina predicted the outcomes of patent/IP litigation more accurately than the specialist lawyers; today it covers further areas of law and further US jurisdictions. The platform draws from past cases, analysing vast amounts of legal data, including court records, dockets, motions, pleadings and other case-related information. It then applies natural language processing and data analytics techniques to extract patterns, trends and insights from this data regarding judge behaviour, opposing counsel strategies, case outcomes, etc.

A UK example is Solomonic. Solomonic tracks the claims, proceedings and judgments of English court cases. Like Lex Machina, its database extracts information pertaining to the parties, the law firms representing those parties, the issues in the case, the judge, counsel, how each case was decided, the experts in the case, any positive or adverse comments on that expert in the decision, and other key data. This allows for a searchable database, providing insight into how particular judges decided on particular legal issues, and the chance that the particular judge is likely to find in favour of the claimant or the defendant, based on past cases.

Notably Solomonic also uses an experienced team of human legal specialists to sense check and add an extra dimension to the software’s results. Accordingly, Solomonic is combining big data analytics with human input. It is not necessarily predicting the outcome of the actual case at hand; rather, it is providing deep, data-led insight for the lawyers to have a better shot at doing so.

Other, smaller start-ups have come and gone, but they have nevertheless shown that where an AI solution was asked to predict the outcome of an actual case, it could do so better than the human lawyers.

In October 2017 software developed by CaseCrunch, a Cambridge start-up company (which has since dissolved), predicted the outcomes of 775 PPI mis-selling claims. The software was asked to predict, “yes or no”, whether each claim would succeed before the financial ombudsman. The software achieved an accuracy of 86%; the 112 lawyers who analysed the same 775 claims achieved an average of 62.3%. CaseCrunch said that if the question is defined precisely, as was the case with the 775 PPI claims, “machines are able to compete with and sometimes outperform human lawyers”.

Notably the use of analytics to predict outcomes is not legal in all jurisdictions. Article 33 of the Justice Reform Act in French law prohibits judicial analytics: “The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices”.15

The use of AI in decision-making

The use of AI in decision-making is already underway.

At the outset of this article I referenced Judge Padilla making use of ChatGPT in his decision-making. The judge said, “by asking questions to the application, we do not stop being judges, thinking beings”.

This sentiment is mirrored by Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales, in his speech at the Law and Technology Conference of the Law Society of Scotland in June 2023. In that speech, Sir Geoffrey predicted:16

  • AI was likely to make decisions on certain types of legal disputes in the future; and
  • whilst it is unlikely to replace human beings in judicial decision-making in complex, personal cases, it could provide solutions for certain types of civil disputes.

Sir Geoffrey warned that while AI, such as ChatGPT, has the potential to be a valuable tool (and no doubt lawyers will not be able to stand aside from the use of generative AI, given that clients will insist on its being considered where appropriate), it is not infallible and should be used in conjunction with human judgment and expertise.

Sir Geoffrey noted that ChatGPT itself has said: “Ultimately, legal decision-making involves a range of factors beyond just predicting the outcome of a case, including strategic and ethical considerations and client goals”.

We perhaps are not yet at the point where an entire decision can or should be entirely produced by the machine alone, but watch this space. The challenge is how to use AI appropriately and safely given its current state of development.

The use of AI in adjudication decisions: TECSA AI Guidance Note

Litigation and arbitration are not the only forums for decision-making, and construction adjudication too will need to grapple with the ethical and technological challenges of AI.

The Technology and Construction Court Solicitors’ Association (“TECSA”) has recognised that AI, and generative AI in particular, is here to stay and that its use will only increase exponentially in the future. As its adjudicators must keep abreast of developments in AI, its potential benefits and uses, as well as its risks and limitations, TECSA recently released a “Guidance Note for TECSA Adjudicators on the use of Generative AI”.17

The short guidance note highlights various issues including data privacy and security, responsibility and supervision, transparency and ethical considerations and bias. Notably the guide reminds adjudicators that they remain personally responsible for their decisions, regardless of what tools are used in drafting and/or research (e.g. generative AI or otherwise).

AI has undeniable potential in aiding legal decision-making in construction law. However, a balanced and informed approach is essential to ensure justice, fairness and efficacy.


Conclusion

Not only did the Collins English Dictionary recognise the significance of AI in 2023, but so too did the Technology and Construction Court (TCC) in the UK. 2023 marked the 150th anniversary of the TCC, and the year was filled with a number of important conferences and celebrations. At the Gala Dinner on 2 November, which brought the anniversary to a close, Sir Rupert Jackson highlighted the importance of AI, stating: “we should be maximising its benefits, whilst taking into account its risks”. Hear, hear!

I asked ChatGPT to write a two-paragraph conclusion for me for this article. It drafted the following:

“In conclusion, the advancements in AI, particularly generative AI, are set to revolutionise the practice of construction law. From enhancing contract analysis and review to aiding in the selection of tribunals, predicting dispute outcomes, and assisting in decision-making, AI holds transformative potential. However, it is crucial to approach this technology with a balanced and informed perspective, considering issues of privacy, data security, ethical considerations, and the need for human oversight. 

As we stand on the cusp of an AI-driven future, it is essential for legal professionals to embrace this change and adapt to the evolving landscape. The use of AI in construction law is not about replacing human expertise but about leveraging technology to enhance efficiency, accuracy, and fairness. While the journey towards fully integrating AI into legal practice may be filled with challenges and uncertainties, the potential benefits it offers are undeniable. The future of construction law will be shaped by those who can effectively harness the power of AI while navigating its complexities and risks”.

A bit boring, but pretty good. I think I prefer either Sir Rupert Jackson’s succinct summary regarding AI, or this slightly longer one of mine:

“The landscape of AI is rapidly evolving. We need to embrace it as the enabler that it is, paying careful attention to its risks and challenges, or get left behind. Construction law is on the cusp of a complete transformation”.
