
Waikato Business News June/July 2023

Waikato Business News has for a quarter of a century been the voice of the region’s business community, a business community with a very real commitment to innovation and an ethos of cooperation.


Tech Talk: Regulating AI

ChatGPT has taken the world by storm. It's exciting to see this next-generation technology being used to make life easier through very humanlike interaction between man and machine.

You can ask it any question and receive an answer that sounds like one given by a real person. And like humans, ChatGPT's answers are limited to the data it has learned from, and they get better and better as that data grows.

Recently there has been much talk about the dangers of AI technology and the potential of a regulatory response to address this. Most notably, in the USA there was a congressional hearing with Sam Altman (OpenAI) and several other people from organisations in the AI space. Here are the focal points of what Altman said regarding regulation of this technology:

TECH TALK
BY LUKE MCGREGOR

Luke McGregor is a software architect at Waikato software specialist Company-X.

First, it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the US government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.

Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The US government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organisations, which can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.

Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard setting.

While, from the outside, calls for complex licensing, constantly changing goalposts and expensive testing procedures might seem unlikely coming from the CEO of an AI company, it's important to understand how these regulatory changes would benefit OpenAI and, conversely, hurt other businesses.

OpenAI is a large player in the AI space, made even larger by Microsoft's recent multibillion-dollar investment in the company. This gives them the size to weather the prohibitive cost of regulatory compliance. Regulation will create barriers to entry for new competitors and consolidate more of the AI problem space into exceptionally large companies. This would be extremely profitable for the players that have already established themselves in the space, such as OpenAI, Google and Microsoft.

Regulation is unlikely to move at the pace of technical development in the AI space, and even if it did, developers would find it almost impossible to keep up with those constantly changing regulations.

The technology behind OpenAI is mostly not defensible IP; other companies with enough money to train a model could compete with OpenAI's product. There is currently a wide variety of open-source models that differ from ChatGPT mostly in the quantity of training rather than the sophistication of the model itself. It's likely that regulation would cull many emerging competitors to OpenAI, giving OpenAI some breathing space to consolidate its position.

A more altruistic regulatory suggestion came from Christina Montgomery of IBM, who called for transparency about when AI is in use:

Be Transparent, Don’t Hide Your AI – Americans deserve to know when they are interacting with an AI system, so Congress should formalise disclosure requirements for certain uses of AI. Consumers should know when they are interacting with an AI system and whether they have recourse to engage with a real person, should they so desire. No person, anywhere, should be tricked into interacting with an AI system. AI developers should also be required to disclose technical information about the development and performance of an AI model, as well as the data used to train it, to give society better visibility into how these models operate. At IBM, we have adopted the use of AI Factsheets – think of them as similar to AI nutrition information labels – to help clients and partners better understand the operation and performance of the AI models we create.
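The nutrition-label analogy can be made concrete with a small sketch. Everything below (the field names, the build_factsheet helper, and the example model) is a hypothetical illustration of the idea, not IBM's actual FactSheets format:

```python
import json


def build_factsheet(model_name, purpose, training_data, metrics, limitations):
    """Assemble a simple disclosure record for an AI model.

    This is an illustrative schema only: the fields mirror the kinds of
    things the column mentions (training data, performance, caveats),
    not any published standard.
    """
    return {
        "model": model_name,
        "intended_purpose": purpose,
        "training_data": training_data,    # provenance of the data used
        "performance": metrics,            # headline evaluation results
        "known_limitations": limitations,  # caveats users should know about
    }


# A fictional example model, for demonstration only.
sheet = build_factsheet(
    model_name="acme-chat-v1",
    purpose="Customer-support chat assistant",
    training_data="Licensed support transcripts, 2019-2022",
    metrics={"answer_accuracy": 0.87},
    limitations=["May produce outdated answers", "English only"],
)

print(json.dumps(sheet, indent=2))
```

Even a minimal record like this would let a consumer see at a glance what a model was trained on and where it is known to fall short, which is the "informed choice" the column argues for below.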

This seems like a far more useful regulation: not only would it be inexpensive to implement and avoid locking out less established players, it would also give users an informed choice.

Regulation of AI technology will concentrate control in exceptionally large companies, which can stifle innovation. AI is heavily based on data, and the total capabilities of any system are limited by the quantity and quality of its training data. One of the fundamental ways of protecting people from the negative impacts of AI is to control the data that users give to such systems.

As with many technologies, sharing data has implications. A better understanding of the personal costs of sharing data with AI will help us all make more informed decisions about who we share data with and what we let them do with it.

Consumers should be looking for products that provide strong guarantees of privacy and data security. Realistically, we need to understand that this comes with an increased direct financial cost in exchange for our long-term digital security.

