
Google Bard: OpenAI’s Exclusivity Deal with Microsoft May Violate Antitrust Law in the US

This past weekend, my Executive Ethics course at Columbia Business School covered antitrust law and anti-competitive behavior. As the class dug deeper into how exclusivity clauses stifle consumer benefit, I became increasingly curious about whether Microsoft’s exclusivity deal with OpenAI (to be the only cloud provider that can offer GPT-4) potentially violated US antitrust law. So, naturally, I decided to ask both ChatGPT and Google Bard to provide three arguments for why this particular exclusivity deal was in violation of the Sherman Act.

Exactly as I expected, ChatGPT immediately announced that it was unable to answer such a question, though it happily explained some generalities about the Sherman Act. Bard, on the other hand, gave a seemingly quite convincing breakdown of the arguments for and against an anti-competitive ruling.

> It is possible that the exclusivity clause between Microsoft and OpenAI could be a violation of the Sherman Act. The Sherman Act is a federal law that prohibits anti-competitive behavior, such as agreements between companies to divide up markets or to fix prices.
>
> The exclusivity clause in the Microsoft-OpenAI agreement states that OpenAI will not license its technology to any other cloud provider. This could give Microsoft an unfair advantage in the cloud computing market. For example, Microsoft could use its exclusive access to OpenAI's technology to develop new cloud-based products and services that would be difficult for its competitors to match.
>
> However, it is also possible that the exclusivity clause is not a violation of the Sherman Act. The law allows for certain types of agreements that could be considered anti-competitive, if they are necessary to promote innovation or to protect intellectual property. In the case of the Microsoft-OpenAI agreement, it could be argued that the exclusivity clause is necessary to protect OpenAI's intellectual property and to ensure that its technology is used in a responsible way.
>
> Ultimately, whether or not the exclusivity clause is a violation of the Sherman Act would be up to a court to decide. If the Federal Trade Commission (FTC) or a private company challenges the agreement, the court would need to weigh the potential benefits of the agreement against the potential harm to competition.
>
> Here are some of the arguments that could be made in favor of and against the exclusivity clause:
>
> Arguments in favor of the exclusivity clause:
>
> * The exclusivity clause is necessary to protect OpenAI's intellectual property.
> * The exclusivity clause will help to ensure that OpenAI's technology is used in a responsible way.
> * The exclusivity clause will allow Microsoft and OpenAI to collaborate more closely on the development of AI technology.
>
> Arguments against the exclusivity clause:
>
> * The exclusivity clause gives Microsoft an unfair advantage in the cloud computing market.
> * The exclusivity clause could stifle innovation in the AI industry.
> * The exclusivity clause could harm consumers by reducing competition and leading to higher prices.
>
> It is important to note that the exclusivity clause is just one part of the Microsoft-OpenAI agreement. The agreement also includes provisions that could promote competition, such as a requirement that Microsoft make OpenAI's technology available to other companies on a fair and reasonable basis. Ultimately, the impact of the exclusivity clause on competition will depend on how it is implemented in practice.

Of course, it’s *critical* to keep in mind what’s happening here. Bard is predicting likely arguments based on the data its massive model was trained on, and contextualizing that within the specific topics I’ve given it (“OpenAI, Microsoft, Sherman Act”) to produce a reasonably compelling answer to what I’ve asked. A few things about this are interesting to me:

  1. Google and OpenAI have each made decisions about how their digital agents respond to queries that could generate content implicating their creators in a negative way. OpenAI’s agent generated an answer that sanitized my context. Google, on the other hand, did no such thing; in fact, when provoked further, Bard was more than happy to incriminate itself.
  2. Bard is not, despite my click-bait title here, actually suggesting that OpenAI and Microsoft are in violation of the Sherman Act. As we’ve recently learned from the lawyer who cited ChatGPT’s made-up cases, AI agents are still far from fully understanding the nuances of regulation. However, they create convincing threads that a human can follow and use as scaffolding for their own work and research. Bard, right now, seems willing to play along.
  3. The arguments Bard gives against the exclusivity clause (not a direct answer to the question I asked, but contextually correct) are fairly generic reasons why any exclusivity clause might violate the Sherman Act. But they’re not so generic that they would transfer unchanged to other contexts: the generated text specifically calls out how the clause relates to a particular part of Microsoft’s business, and it mentions problems with innovation in a particular industry. It’s possible that this is over-fitting to training data, but the specificity is clever enough that it reads as very human-relevant.

I keep coming back to the idea of the “statistically average” answers that are so common with these corporate LLM-based agents. Was this an inherently useful exchange? Not especially, for me, beyond satisfying an experiment in comparing the outputs of two large language models. For someone familiar with AI but not the Sherman Act, though, there may be some new opportunities for accelerating human knowledge, as long as you question the output and consider it critically.
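If you wanted to run a comparison like this programmatically rather than through the chat interfaces, a minimal sketch might look like the following. It assumes the OpenAI Python SDK (the 0.x-style `ChatCompletion` call); the model name, temperature, and prompt wording are illustrative, and the `ask_bard` helper is purely hypothetical, since Bard doesn’t expose a public API, so you’d substitute whatever Google model access you actually have.

```python
# Minimal sketch: send the same antitrust prompt to two chat models and compare.
# Assumes the OpenAI 0.x Python SDK; the Bard side is a hypothetical placeholder.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

PROMPT = (
    "Provide three arguments for why Microsoft's exclusivity deal with OpenAI, "
    "to be the only cloud provider that can offer GPT-4, may violate the Sherman Act."
)

def ask_chatgpt(prompt: str) -> str:
    """Send the prompt to an OpenAI chat model and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

def ask_bard(prompt: str) -> str:
    """Hypothetical stand-in: swap in whatever Google model client you have access to."""
    raise NotImplementedError("Bard has no public API; substitute your own client.")

if __name__ == "__main__":
    print("=== ChatGPT ===")
    print(ask_chatgpt(PROMPT))
    print("=== Bard ===")
    try:
        print(ask_bard(PROMPT))
    except NotImplementedError as err:
        print(f"(skipped: {err})")
```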

Anyway. I’d like to see the FTC take up the question I’ve asked of ChatGPT and Bard, but as far as I’m aware, they don’t have an LLM yet. If they did, maybe we could ask it how to get them to address tech monopolies. 😇