Microsoft is already heavily involved in the ongoing development of artificial intelligence. The tech giant invested an initial sum of $1 billion into OpenAI (the Sam Altman-led creator of ChatGPT).
It has since signalled an intention to invest a far greater sum going forward, as part of Microsoft’s ongoing commitment to expansion – it is now the world’s most valuable company, with a valuation of $3 trillion. The collaboration with OpenAI is especially interesting given that the ChatGPT creator also recently opened a dedicated office in London, meaning the two are likely to work closely together in the future.
A number of key Microsoft staff are also involved in this expansion. Microsoft’s head of AI, Mustafa Suleyman, is a key figure leading the project, stating in a recent interview that “Microsoft AI plans to make a significant, long-term investment in the region [London] as we begin hiring the best AI scientists and engineers into this new AI hub”.
Suleyman brought a wealth of AI-adjacent experience to Microsoft, including co-founding the start-up DeepMind, which was acquired by Google in 2014. During this period he collaborated with Jordan Hoffmann, an AI-focused engineering expert who, it has been announced, will be at the helm of the new London office.
There are a number of reasons for this move. First and foremost, the continued growth of AI as an industry (see the increasing popularity of products like ChatGPT) makes it inevitable that ever more people will be working to release the ‘next big tool’.
Why London specifically, though? Suleyman said in his press conference that “there is an enormous pool of AI talent and expertise in the UK,” which aligns closely with the UK’s ongoing drive to encourage more graduates and job roles in STEM fields.
Furthermore, the UK has taken an active interest in the regulation of AI (a hugely contentious area of debate which consistently splits voters, politicians, and lawyers). While this may seem unattractive to tech companies, which notoriously prioritise rapid innovation, Suleyman has said that the UK’s ‘safety-first’ approach to AI is in fact a positive factor from Microsoft’s perspective. There are a number of possible reasons for this, which we discuss in more detail below.
Aspiring lawyers can mine this story for a huge number of relevant points in their applications. Whether you are an aspiring solicitor applying for vacation schemes and training contracts, or a would-be barrister aiming for pupillage, the relevance of AI – and of how the law interacts with pressing current affairs stories like this one – is hard to ignore.
In practice, it’s unlikely you’ll encounter a client who hasn’t at least begun to explore how they can use AI tools (if not implemented them already). Indeed, some firms have introduced their own in-house AI tools: Allen & Overy made headlines a few months ago with its ‘Harvey’ chatbot, which allows lawyers to shave time off mundane tasks.
As a result, AI has become a common talking point on application forms and in interviews – though it is important to anchor your points in tangible stories like these rather than discussing them purely in the abstract (which may come across as ‘waffle’).
There are a number of legal topics which are likely to arise based on a story like this.
First off, there are the obvious issues of AI regulation with regard to data privacy law. This is a huge practice area for a number of law firms: Chambers rankings show that Band 1 outfits range from Magic Circle heavyweights like Linklaters to more boutique offerings like Bird & Bird.
Ongoing debates include whose data AI models may be trained on, and how outdated laws apply to these newly emerged avenues of data collection (often resulting in legal ‘grey areas’ on which it is difficult to advise clients confidently).
One recent story suggested that some AI companies are using transcriptions of YouTube videos to train their models – a practice of questionable legality.
Naturally, intellectual property rights are also involved. Models trained on individuals’ copyrighted works are potentially committing copyright violations all the time – in response, many artists have protested and attempted to pull their music from major streaming platforms over the past few months. And in the other direction: can companies like Microsoft (or the people using their AI tools) claim copyright in works which the tools themselves have created?
Many of these issues are also highly cross-jurisdictional. Rapidly evolving AI tools have left lawmakers around the world scrambling to create legal frameworks within which to manage them, and the outcomes have varied considerably between regions.
The UK and US, for example, have taken markedly different approaches in recent months. For a company like Microsoft, with significant operations in both countries and data flowing constantly between engineers innovating on each side of the Atlantic, how do you address legal hurdles that exist at one end of an email but not the other?
You might even want to discuss areas like employment rights (employment law being a major practice area for many firms), since the growth of AI has increasingly caused tension here. Many employees want greater job security amid a perceived impending ‘threat’, while corporations will be asking how to manage their budgets effectively when balancing these competing interests.
In short, Microsoft’s latest London expansion is a useful starting point from which aspiring lawyers can initiate powerful conversations about the interaction between AI and the law during their application processes. This is a dynamic area of debate which firms and chambers are sure to explore further in the future – and their next intake would do well to demonstrate their knowledge of it.