Data that reflects the intended functionality

By Lydia Gauerhof

The performance and robustness of Machine Learning (ML) approaches, such as Deep Neural Networks (DNN), rely heavily on data. Furthermore, the training data encodes the desired functionality. However, it is challenging to collect (or generate) suitable data. And … what does data suitability mean?

We use the term data suitability to describe data that is free from:

• under-sampling of relevant content (e.g. data features)
• unintended correlations

In this case, data reflects the intended functionality.
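The two criteria above can be sketched in plain Python. The dataset, class names, and thresholds below are hypothetical, chosen only to illustrate the idea of flagging under-sampled classes and suspicious label/condition correlations:

```python
from collections import Counter

# Hypothetical labelled samples: (label, capture condition) pairs
# from an imagined driving dataset.
samples = [
    ("pedestrian", "sunny"), ("pedestrian", "sunny"), ("pedestrian", "sunny"),
    ("cyclist", "sunny"), ("car", "rainy"), ("car", "rainy"),
    ("car", "sunny"), ("car", "sunny"), ("car", "sunny"), ("car", "sunny"),
]

# Check 1: under-sampling -- flag any class below a minimum share.
labels = Counter(label for label, _ in samples)
total = len(samples)
under_sampled = {c: n for c, n in labels.items() if n / total < 0.2}

# Check 2: unintended correlation -- flag label/condition pairs that
# dominate their class (e.g. every "pedestrian" image taken in sunshine),
# which a model could exploit as a shortcut instead of the intended feature.
pairs = Counter(samples)
correlated = {
    (label, cond): n / labels[label]
    for (label, cond), n in pairs.items()
    if n / labels[label] > 0.9
}

print(under_sampled)  # classes with too few samples
print(correlated)     # near-deterministic label/condition co-occurrences
```

In practice such checks would run over data features extracted at scale, but the principle is the same: quantify coverage and co-occurrence before trusting the data to encode the intended functionality.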

Let’s have a look at video-based object detection that is used for perception in automated driving…


Moving towards safe autonomous systems

By Professor John McDermid OBE FREng

Autonomy, artificial intelligence (AI), machine learning (ML): buzz words that crop up in the news, social media, and conversation every day.

The societal benefits of such technologies are evident now more than ever — quicker diagnosis of illness and disease, contactless delivery from a self-driving pod, at least in some parts of the world — and perhaps autonomous taxis in a few years.

They can also bring huge benefits to organisations:

  • quicker processing of data
  • smarter case management
  • improved efficiency.

While the potential benefits are clear, the introduction of AI and ML-based systems cannot…


Recommendations for law, policy and ethics of AI for the United Arab Emirates (UAE)

This is my final article in this series, and a great way to start the New Year! After discussing the liability of autonomous systems under UAE law, identifying gaps and comparing it to other regimes, this article is, in a way, the practical conclusion of my research, so I will discuss my recommendations for law, policy and ethics for the United Arab Emirates (UAE).

1 — Legislative changes recommendations

We have a tendency to regulate any new matter in a civil law regime. Whether a decree or a law, we issue regulations with the purpose of covering any potential gaps or uncertainty. …


The role of human factors in the safe design and use of AI in healthcare

News headlines and research studies extol the virtues of artificial intelligence (AI), claiming that it can outperform a human clinician in tasks such as breast cancer screening and the treatment of sepsis.

But a study in the British Medical Journal earlier this year found that such claims were exaggerated. Too few of the studies involved randomised clinical trials, testing in a real-world clinical setting, or tracking participants over time.

In essence, the AI algorithms are being developed and tested out of their use context. …


Products, systems and organisations are increasingly dependent on data. In today’s Data-Centric Systems (DCS), data is no longer inert and passive. Its many active roles demand that data is treated as a separate system component.

By Dr Alastair Faulkner and Dr Mark Nicholson

Data is challenging to manage and control: it has a habit of being consumed by systems it was not produced for, whether by omission or by design, and perhaps without the system designer’s awareness. It commonly passes (often unchecked, or even unwittingly) across system and organisational boundaries.
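A minimal sketch of the kind of check this implies at a system boundary, before data is consumed. The field names, types, and range below are assumptions for illustration, not a prescribed schema:

```python
# Hypothetical schema a consuming system expects at its boundary.
SCHEMA = {
    "sensor_id": str,
    "speed_kmh": float,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    # A producer-side assumption (a plausible speed range) made explicit,
    # so data produced for another purpose cannot slip through silently.
    if isinstance(record.get("speed_kmh"), float) and not (0.0 <= record["speed_kmh"] <= 300.0):
        problems.append("speed_kmh out of expected range")
    return problems

print(validate_record({"sensor_id": "cam-01", "speed_kmh": 42.0}))  # []
print(validate_record({"speed_kmh": "fast"}))
```

Whether a failing record is rejected, quarantined, or merely logged is a design decision for the receiving system; the point is that the boundary check makes the producer's assumptions explicit rather than implicit.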

Assuring the safety of data-centric systems

Data may be fed to systems…


A comparison with other regimes

In my first and second posts, I discussed the liability of autonomous systems under UAE law and the remedies available for injured persons.

In this post, I will compare the UAE liability regime to others, in particular the European Union regime and its approach to the liability of autonomous systems. I will also discuss the approach under common law, the regime applied in England, the USA, Australia and other parts of the world.

To summarise the first two posts, the UAE liability system, under the UAE Civil Code, is mainly based on the regime of “tort” or “acts causing harm” and…


Available remedies for injury or damage caused by autonomous systems

In my first post, I discussed the liability of autonomous systems under the United Arab Emirates (UAE) law.

I mentioned that several laws govern the liability in the UAE and discussed in detail the Civil Code provisions. In particular, the regime of “tort” or “acts causing harm” and who can be liable when an autonomous system causes damage or injury.

It is worth mentioning that I am discussing the UAE law provisions in particular as the UAE National Artificial Intelligence Strategy 2031 spans several sectors, including healthcare and transport, and is intended to enable the “UAE to become the world’s…


Liability of autonomous systems under the UAE Civil Code

The main point of law is: Who is liable when an autonomous system causes injury or death to a person or damage to property?

This is the first in a series of blog posts discussing the liability of autonomous systems under United Arab Emirates (UAE) law.

As a general overview, there is no specific law that governs autonomous systems in the UAE, meaning no law has been enacted at a federal or local level that specifically deals with AI regulation or policy.

However, there are several laws and regulations that cover the liability of autonomous systems, such as:

  • the UAE penal…

What “AI safety” means to them both and steps to collaboration

By Francis Rhys Ward

The term “AI safety” means different things to different people. Alongside the general community of artificial intelligence (AI) and machine learning (ML) researchers and engineers, there are two different research communities working on AI safety:

  • The assuring autonomy/safety engineering field: a community of experts with a long history in assuring real-world autonomous systems (not just AI and ML!)
  • The AI safety/longtermist community: a relatively new field focused on the consequences of transformative AI (TAI), artificial general intelligence (AGI), and smarter-than-human AI

Having worked…


When to stop testing automated driving functions

by Professor Simon Burton

A question I am often asked in my day job is “How much testing do we need to do before releasing this Level 4 automated driving function?” — or variations thereof. I inevitably disappoint my interrogators by failing to provide a simple answer. Depending on my mood, I might play the wise guy and quote an old computer science hero of mine:

(Program) testing can be used to show the presence of bugs, but never to show their absence! — Edsger W. Dijkstra, 1970.

Sometimes I provide the more honest…

Assuring Autonomy International Programme

A £12M partnership between @LR_Foundation and @UniOfYork to assure the safety of robotics and autonomous systems worldwide. https://twitter.com/AAIP_York
