Data that reflects the intended functionality

By Lydia Gauerhof

The performance and robustness of Machine Learning (ML) approaches, such as Deep Neural Networks (DNNs), rely heavily on data. Moreover, the training data encodes the desired functionality. However, collecting (or generating) suitable data is challenging. And … what does…

Moving towards safe autonomous systems

By Professor John McDermid OBE FREng

Autonomy, artificial intelligence (AI), machine learning (ML): buzz words that crop up in the news, social media, and conversation every day.

The societal benefits of such technologies are evident now more than ever — quicker diagnosis of illness and disease, contactless delivery from a…

Liability of autonomous systems under the UAE Civil Code

The central question of law is: who is liable when an autonomous system causes injury or death to a person, or damage to property?

This is the first in a series of blog posts discussing the liability of autonomous systems under United Arab Emirates (UAE) law.

As a general overview…

What “AI safety” means to them both and steps to collaboration

By Francis Rhys Ward

The term “AI safety” means different things to different people. Alongside the general community of artificial intelligence (AI) and machine learning (ML) researchers and engineers, there are two different research communities working on AI safety:

Assuring Autonomy International Programme

A £12M partnership between @LR_Foundation and @UniOfYork to assure the safety of robotics and autonomous systems worldwide. https://twitter.com/AAIP_York
