All law students will have heard in some lecture or seminar about the increased use of technology in the legal profession. Artificial Intelligence (AI) can review existing and new contracts, locate relevant information and recognise mistakes, all faster than a human can.
Defining what AI really means is beyond the scope of this article, so for now we will simply define it as an imitation of the ability to draw conclusions and make decisions based on information given and prior experience.
Examples of AI systems that exist today include “robotic” vehicles, speech recognition, autonomous planning and scheduling, spam filtering, logistics planning, robotics and machine translation.
Now that we have very briefly explored what AI is, it is time to investigate the impact it’s having.
I was fortunate enough to ask Stephen Fry about the subject after his talk about his new book at the literary festival at the University of East Anglia. Of course, we could not delve deep into the subject before I was whisked away by an assistant anxious to keep the queue moving.
Fry did have time to make an interesting comparison between humans giving life to artificial intelligence and the Greek god Prometheus stealing fire and giving it to humanity, thus enabling the progress of civilisation.
Are we also giving life to something that will eventually ‘outrun’ us and potentially become dangerous?
The late and great Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race … Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded”.
However, Fry did comment that with AI being developed and used by companies all around the world, there is probably no stopping it. But let us not contemplate the idea that it can one day end all human life! Today, AI mostly serves as a helping hand to us.
Examples of what is happening in the legal profession are ROSS, a legal research tool which extracts facts from over a billion documents in a second, and RAVN ACE, a computer platform able to convert unstructured data into structured information within hours.
Research from The Times and The Brief Premium predicts that two out of three of the UK’s leading legal practices will soon use technology like this. This is an impressive figure, as many lawyers have had reservations about using AI for some time now.
The CEO of ROSS Intelligence estimated that ROSS could save lawyers substantial time in the future. When AI is saving us both time and work, why is there still so much concern about further developing and using it – and is this concern valid?
There is concern that AI could repeat much of the prejudice that humans have about race and gender because of the way it’s built.
For example, software used to help courts predict criminality has been shown to be skewed against black men. This is because when AI is trained on biased data, it can make biased decisions.
If AI algorithms are written with today’s gender bias in place, this may lead to businesses making inherently biased decisions. For example, according to research undertaken by UNESCO, women account for only 29% of workers in fields such as IT and engineering worldwide. An AI system trained on such statistics can absorb them as biases in its algorithm, and might therefore dismiss female candidates simply because the profession is dominated by men.
Another example of an algorithm itself becoming prejudiced is where engineers train facial-recognition algorithms mostly on images of white males. The system will then make errors when used on non-white faces.
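The mechanism is simple enough to show in a few lines. The toy sketch below is not the code of any real criminal-justice or recruitment system; the data and the two group labels are invented purely to illustrate how skewed training data produces skewed predictions.

```python
# Toy illustration: a "model" that learns the majority outcome per group
# from its training data. All records here are made up for illustration.
from collections import defaultdict

def train(records):
    """records: list of (group, outcome) pairs with outcome 0 or 1.
    Returns the majority outcome observed for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [negatives, positives]
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: (1 if pos > neg else 0) for g, (neg, pos) in counts.items()}

# The training set is skewed: group A appears mostly with positive
# ("high risk") labels and group B mostly with negative ones, even if
# the true underlying rates were equal.
training_data = ([("A", 1)] * 80 + [("A", 0)] * 20 +
                 [("B", 1)] * 20 + [("B", 0)] * 80)

model = train(training_data)
print(model)  # {'A': 1, 'B': 0} -- everyone in group A is flagged as high risk
```

The model has learned nothing about individuals, only the imbalance in its data, yet it now treats the two groups entirely differently.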
With all the focus on whether machine learning will replace jobs, the consequences of biased AI have been overlooked by the public. Hopefully, the issue can be solved if engineers are aware of it when creating their algorithms.
In the legal profession, the main concern seems to revolve around the issues of accountability and liability.
The House of Commons Science and Technology Committee published a report in 2016 on Robotics and Artificial Intelligence which explores liability in relation to self-driving cars. Who would ultimately be responsible for the harm caused by those cars?
Flett and Wilson suggest that the people writing the algorithm must be liable for the outcome. It is, however, not that simple: the Law Society points out that this raises issues around apportioning responsibility where there are multiple developers.
This may be solved by the report’s proposal to extend motor insurance to cover product liability, with motorists relying on the courts to apply the existing rules of product liability under the Consumer Protection Act, and negligence under the common law, to determine who should be responsible.
It looks like artificial intelligence is becoming a challenge to the legal sector, but it may prove to have its advantages as well.
A barrister I have been corresponding with was kind enough to give me his point of view on this. He suggested that legal AI can be advantageous in less complex cases such as traffic offences.
Calculating fines using straightforward formulas in cases such as pleas for traffic offences could make court proceedings more efficient. This is surely positive, as time is money and, as we know, the government will jump at every opportunity to save it.
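To see why such cases suit automation, consider how mechanical the calculation can be. The sketch below loosely echoes the idea of income-linked speeding bands, but the thresholds and percentages are illustrative assumptions of mine, not the actual sentencing guidelines.

```python
# Hypothetical formula-based fine calculator for a speeding plea.
# Bands and rates are illustrative assumptions, not real law.
def speeding_fine(speed, limit, weekly_income):
    """Return a fine as a multiple of weekly income, by how far over the limit."""
    excess = speed - limit
    if excess <= 0:
        return 0.0            # not speeding, no fine
    if excess <= 10:
        rate = 0.5            # lowest band: ~50% of weekly income
    elif excess <= 20:
        rate = 1.0            # middle band: ~100%
    else:
        rate = 1.5            # highest band: ~150%
    return round(weekly_income * rate, 2)

print(speeding_fine(48, 30, 500))  # 18 mph over -> middle band -> 500.0
```

A rule this mechanical leaves no discretion to exercise, which is precisely why routine pleas are the natural starting point for automation, and why anything contested belongs before a human.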
The issue that arises is one of fairness and whether this would give defendants proper access to justice. The barrister made the point that we should give the person the option to seek a human tribunal.
In my opinion, this would certainly aid with providing access to justice whilst still enabling court proceedings to become more efficient. Some universities have also been looking at how AI software can produce a draft judgement for judges to review in simple cases where facts are undisputed.
It will be interesting to see if this will be adopted by the courts as this would enable judges to spend more time on complex cases where human contribution and consideration are needed.
What about law firms then? There has been worry among us students about whether technology will reduce our opportunities for training contracts.
I expect that AI, at least in the near future, will do the mundane tasks and that this will enable trainees to take on more creative assignments.
This idea was supported by trainee solicitor Mirwis, who suggested in 2016 that junior lawyers would instead learn the business needs of clients earlier and be given greater responsibility in handling more complex tasks.
Speaking to a solicitor from Ashtons Legal, I gained some insight into the firm’s use of AI and his opinion on the subject. He supports the idea that AI can take over the boring tasks that employees previously had to do; using a software robot to help cleanse their data ensures efficiency and consistency.
Having personally contributed to a case in which hundreds of documents needed to be examined, I support his view that people eventually get bored with such mundane tasks and become more prone to errors. As he so eloquently put it: “it is not about replacing our jobs but about helping us to perform tasks better or more quickly, allowing us to add value in other ways or use our skills on more rewarding work”.
Regardless of your view on the development of artificial intelligence, it is most likely unavoidable. As Fry pointed out, it would probably be a better idea to prepare for it rather than to try and stop it.
However, it is not just about whether we can stop this development, but whether we should. The lawyers I have spoken to seem to favour the idea of progressing with it and using AI in the form of document automation, data analysis, automated software, and so on.
If the issues around bias and accountability are solved, AI can prove to be an excellent tool in the legal profession. It is difficult to predict what will happen over the next five or ten years, but it is obvious that the development of AI will continue regardless of these concerns. We will just have to wait and see how it unfolds.
Published: 04/04/18 Author: Kristin Klungtveit