Artificial Intelligence (AI) experimentation is now widespread across South African companies, and enthusiasm for it is running high. How high? According to Business Tech, over 45% of South African businesses say they are already actively piloting AI within their organisations.
But the rush to release robots should not come at the expense of the data on which organisations are built. Robots bring profits, but they also bring ownership challenges.
In our experience, many South African business owners considering AI are concerned with questions like: When you move towards process automation, software bots, machine learning and artificial neural networks, what’s the smartest, safest way to proceed? Who owns the output? Can the underlying data be owned?
In the past, there tended to be a direct link between human input to a programme and the output it produced. Today, the relationship between input and output is no longer linear.
According to DataRobot CEO Jeremy Achin, AI falls into two broad categories: ‘narrow AI’, which is a simulation of human intelligence, often focused on performing one task extremely well, and ‘Artificial General Intelligence’ or AGI, which refers to machines with a general intelligence that, like a human being, can solve almost any problem.
For the time being, most organisations deal with narrow artificial intelligence in the form of algorithms, as part of a broader process of machine learning. In machine learning, the rules are created by the algorithms themselves – not by the developers of the algorithms.
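To make that point concrete, the short sketch below is a hypothetical Python example using the scikit-learn library, with made-up customer data and feature names. The developer never writes the classification rules; the algorithm derives them from the training data, and it is precisely those machine-generated rules that raise the ownership question discussed next.

```python
# Illustrative sketch only: hypothetical data and feature names.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical historical records: [monthly_spend, years_as_customer]
X = [[200, 1], [1500, 4], [300, 2], [2500, 7], [100, 1], [1800, 5]]
y = [0, 1, 0, 1, 0, 1]  # 0 = standard customer, 1 = high-value customer

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the algorithm derives its own decision rules from the examples

# The rules printed below were never typed by a programmer; they are the model's output.
print(export_text(model, feature_names=["monthly_spend", "years_as_customer"]))
```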
A distinction has to be drawn between the output in the form of material embodiments, like compiled databases, and the underlying data. Although data is a protectable interest, our case law seems to suggest that it may not be capable of being owned. This is different to material embodiments of data, which are generally protected by copyright law.
At the risk of generalising, the underlying principle has been that only things created by humans can be protected by copyright. As such, our law provides that the author of the work (or the author's employer) owns the copyright. Under South African copyright law, the author of a computer-generated work is the person who made the arrangements necessary for its creation. But in machine learning, it is the algorithms (the robots, not the humans) that make the arrangements for the rules to be created.
So who owns the copyright? The robots? No. Where an external service provider is involved, the authorship question rears its head again because, without taking written assignment of the copyright, the service provider – and not the organisation to which the services are delivered – may end up as the copyright owner. Given the uncertainty, the ownership of intellectual property in the context of AI should be contractually regulated upfront by agreeing who will own what.
From a business continuity point of view, it is a good idea for the organisation to make sure that it receives a perpetual licence to use the AI algorithms. The licence may also make provision for placing the source code and implementation documentation in escrow.
A specific and focused restraint of trade clause could also be included in the contract to prevent the service provider from implementing a similar AI solution for the benefit of a competitor down the line. The restraint should be ring-fenced to the specific sector of the industry that the organisation wishes to protect, and only to the degree reasonably required to safeguard its confidential information.
The contract should also include the necessary confidentiality clauses to establish what constitutes confidential information and how it should be treated.