UK: Electronics Newsletter

Last Updated: 29 March 2019
Article by David Lewin, Frances Wilding and James Ward

DeepMind

Artificial Intelligence beyond AlphaGo

DeepMind Technologies Limited is a British company now owned by Google. DeepMind is perhaps best known for AlphaGo, its AI-based software that plays the game of Go, evidently to at least world-championship standard. However, DeepMind's AI patenting activity indicates other interests that go beyond board games and cover various fields of application of AI.

DeepMind Patent Application Families

Up to the present time1, members of around 58 DeepMind families2 of patent applications have been published. Titles and application/patent numbers of key members3 of each family are shown in Table 4 below. A total of 85 members of the families have been published to date1. Members of 4 families were published in 2014, with filing dates in 2012, 2013 and 2014; members of the other families are more recent, published in 2018 and 2019 with application dates from 2016 to 2018. More families and family members can be expected to appear as further publications take place.

DeepMind Patents

As would be expected with such a young portfolio, few patents have been granted so far. These are listed in Table 1. Claim 1 of each of the US patents is reproduced further below. So far, there are no granted European patents in the published record1.

 

Table 1

DeepMind Patents

Patent number(s) | Title
CN105144203(B); US9342781(B2) | Signal processing systems
US10032089(B2) | Spatial transformer modules
US10176424(B2) | Generative neural networks
US10198832(B2) | Generalizable medical image analysis using segmentation and classification neural networks
US8644607(B1); US8971669(B2) | Method and apparatus for image processing

 

Fields of Interest for DeepMind

Although DeepMind's interests go beyond board games, they appear to be focused on a narrow range of technologies, as indicated by the detailed classifications of the members of the DeepMind portfolio under the Cooperative Patent Classification (CPC) hierarchy1.

Each patent or application is usually given more than one detailed classification to main groups or sub-groups of the CPC. DeepMind's families have been given 235 detailed classifications, all falling within only 9 CPC sub-classes, as shown in Table 2. The vast majority of these fall within sub-class G06N, which covers, amongst other things, neural networks. In more detail, the 235 classifications range across only 65 CPC main/sub-groups, with 80% of them accounted for by the 17 main/sub-groups shown in Table 3. These focus on neural networks and machine learning, applied in contexts such as natural language processing, image analysis and speech synthesis, with a surprise mention of musical instruments.
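As a rough illustration of how such a breakdown is produced, the sketch below tallies a small, hypothetical handful of detailed CPC symbols by their four-character sub-class (the symbols shown are examples chosen for illustration; the real portfolio comprises 235 classifications across 9 sub-classes):

```python
from collections import Counter

# Hypothetical sample of detailed CPC classifications; real portfolio data
# would be extracted from the published patent records.
classifications = [
    "G06N3/08", "G06N3/0454", "G06N3/04", "G06K9/4628",
    "G06F17/2818", "G06T7/11", "G10L13/00", "G06N3/088",
]

# A CPC sub-class is the first four characters of a symbol (e.g. "G06N").
by_subclass = Counter(code[:4] for code in classifications)

for subclass, n in by_subclass.most_common():
    print(subclass, n)
```

The same tally over the main-group/sub-group level (the full symbol before the "/") yields the finer-grained view in Table 3.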

   

Table 2

CPC sub-class | No. of detailed classifications within sub-class | CPC description
G06N | 187 | Computer systems based on specific computational models
G06K | 18 | Recognition of data; presentation of data; record carriers; handling record carriers
G06F | 11 | Electric digital data processing
G06T | 8 | Image data processing or generation, in general
G10L | 5 | Speech analysis or synthesis; speech recognition; speech or voice processing; speech or audio coding or decoding
G10H | 2 | Electrophonic musical instruments
H04N | 2 | Pictorial communication, e.g. television
G05B | 1 | Control or regulating systems in general; functional elements of such systems; monitoring or testing arrangements for such systems or elements
Y04S | 1 | Systems integrating technologies related to power network operation, communication or information technologies for improving the electrical power generation, transmission, distribution, management or usage, i.e. smart grids


Table 3

CPC main group/sub-group | No. of detailed classifications to main group/sub-group | CPC description
G06N3/0454 | 38 | Computer systems based on biological models; Architectures, e.g. interconnection topology; using a combination of multiple neural nets
G06N3/08 | 31 | Computer systems based on biological models; Learning methods
G06N3/04 | 22 | Computer systems based on biological models; Architectures, e.g. interconnection topology
G06N3/0445 | 22 | Computer systems based on biological models; Architectures, e.g. interconnection topology; Feedback networks, e.g. Hopfield nets, associative networks
G06N3/006 | 17 | Computer systems based on biological models; Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
G06N3/084 | 13 | Computer systems based on biological models; Learning methods; Back-propagation
G06N3/088 | 10 | Computer systems based on biological models; Learning methods; Non-supervised learning, e.g. competitive learning
G06N3/0472 | 9 | Computer systems based on biological models; Architectures, e.g. interconnection topology; using probabilistic elements, e.g. p-RAMs, stochastic processors
G06N3/00 | 8 | Computer systems based on biological models
G06N3/063 | 3 | Computer systems based on biological models; Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
G06F17/2818 | 2 | Digital computing or data processing equipment or methods, specially adapted for specific functions; Processing or translating of natural language; Statistical methods, e.g. probability models
G06K9/4628 | 2 | Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints; Extraction of features or characteristics of the image; integrating the filters into a hierarchical structure
G06N3/02 | 2 | Computer systems based on biological models; using neural network models
G06N3/0481 | 2 | Computer systems based on biological models; Architectures, e.g. interconnection topology; Non-linear activation functions, e.g. sigmoids, thresholds
G06N3/082 | 2 | Computer systems based on biological models; Learning methods; modifying the architecture, e.g. adding or deleting nodes or connections, pruning
G10H2250/311 | 2 | Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing; Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation
G10L13/00 | 2 | Speech synthesis; Text to speech systems

DeepMind Patent US9342781 (B2)

  1. A neural network system implemented as one or more computers for generating samples of a particular sample type, wherein each generated sample belongs to a respective category of a predetermined set of categories, and wherein each generated sample is an ordered collection of values, each value having a sample position in the collection, and wherein the system comprises:

a first stochastic layer configured to stochastically select a category from the predetermined set of categories;

a first deterministic subnetwork configured to: receive an embedding vector corresponding to the selected category, and

process the embedding vector to generate a respective sample score for each sample position in the collection; and

a second stochastic layer configured to generate an output sample by stochastically selecting, for each sample position, a sample value using the sample score for the sample position.
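Claim 1 above describes a two-stage stochastic generator: pick a category at random, run it through a deterministic network, then sample a value per position. A minimal NumPy sketch of that claimed structure follows; all dimensions, parameter shapes and the linear form of the "deterministic subnetwork" are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, embed_dim, n_positions, n_values = 4, 8, 5, 3

# Illustrative parameters (not from the patent): a category embedding table
# and a single linear layer standing in for the "deterministic subnetwork".
embeddings = rng.normal(size=(n_categories, embed_dim))
weights = rng.normal(size=(embed_dim, n_positions * n_values))

# First stochastic layer: stochastically select a category.
category = rng.integers(n_categories)

# Deterministic subnetwork: embedding vector -> a score per sample position.
scores = (embeddings[category] @ weights).reshape(n_positions, n_values)

# Second stochastic layer: softmax the scores and sample one value per
# sample position, yielding the ordered collection of values.
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
sample = [int(rng.choice(n_values, p=p)) for p in probs]
```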

DeepMind Patent US10032089 (B2)

  1. An image processing neural network system implemented by one or more computers, wherein the image processing neural network system is configured to receive one or more input images and to process the one or more input images to generate a neural network output from the one or more input images, the image processing neural network system comprising:

a spatial transformer module, wherein the spatial transformer module is configured to perform operations comprising:

receiving an input feature map derived from the one or more input images, and

applying a spatial transformation to the input feature map to generate a transformed feature map, comprising:

processing the input feature map to generate, based on the input feature map, spatial transformation parameters that define the spatial transformation to be applied to the input feature map, and

sampling from the input feature map in accordance with the spatial transformation parameters generated based on the input feature map to generate the transformed feature map.
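The two operations the claim recites for the spatial transformer module, generating transformation parameters from the feature map and then sampling from it, can be sketched in NumPy as below. The localisation form, grid construction and nearest-neighbour sampling are simplifying assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 8
feature_map = rng.normal(size=(H, W))  # stands in for a map derived from an image

# Parameter generation (illustrative form): derive 2x3 affine parameters
# from the input feature map itself, here via a tiny linear map of its mean.
loc_weights = rng.normal(scale=0.01, size=6)
identity = np.array([1.0, 0, 0, 0, 1.0, 0])
theta = (identity + feature_map.mean() * loc_weights).reshape(2, 3)

# Build a sampling grid in normalised [-1, 1] coordinates and transform it.
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # homogeneous coords
src = theta @ grid                                         # source coordinates

# Sample from the input map (nearest neighbour, for brevity) at those points
# to generate the transformed feature map.
cols = np.clip(np.round((src[0] + 1) / 2 * (W - 1)).astype(int), 0, W - 1)
rows = np.clip(np.round((src[1] + 1) / 2 * (H - 1)).astype(int), 0, H - 1)
transformed = feature_map[rows, cols].reshape(H, W)
```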

DeepMind Patent US10176424 (B2)

  1. A neural network system implemented by one or more computers, the neural network system comprising:

a recurrent neural network that is configured to, for each time step of a predetermined number of time steps, receive a set of latent variables for the time step and process the set of latent variables to update a hidden state of the recurrent neural network; and

a generative subsystem that is configured to:

for each time step of the predetermined number of time steps:

generate the set of latent variables for the time step and provide the set of latent variables as input to the recurrent neural network;

update a hidden canvas using the updated hidden state of the recurrent neural network; and

for a last time step of the predetermined number of time steps:

generate an output image using the updated hidden canvas for the last time step.
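The loop the claim describes, latent variables driving a recurrent state that repeatedly updates a "hidden canvas", with an image emitted only at the last step, can be sketched as follows. All parameter shapes, the additive canvas update and the sigmoid output map are assumptions for illustration, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
T, latent_dim, hidden_dim, image_dim = 4, 3, 6, 5

# Illustrative parameters for a minimal recurrent generator.
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_z = rng.normal(scale=0.1, size=(hidden_dim, latent_dim))
W_c = rng.normal(scale=0.1, size=(image_dim, hidden_dim))

hidden = np.zeros(hidden_dim)
canvas = np.zeros(image_dim)  # the "hidden canvas", updated each time step

for _ in range(T):
    z = rng.normal(size=latent_dim)           # latent variables for this step
    hidden = np.tanh(W_h @ hidden + W_z @ z)  # recurrent hidden-state update
    canvas = canvas + W_c @ hidden            # update the hidden canvas

# Last time step only: generate an output image from the updated canvas.
image = 1 / (1 + np.exp(-canvas))
```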

DeepMind Patent US10198832 (B2)

  1. A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to implement:

a first set of one or more segmentation neural networks, wherein each segmentation neural network in the first set is configured to:

receive an input image of eye tissue captured using a first imaging modality; and

process the input image to generate a segmentation map that segments the eye tissue in the input image into a plurality of tissue types;

a set of one or more classification neural networks, wherein each classification neural network is configured to:

receive a classification input derived from a segmentation map of eye tissue; and

process the classification input to generate a classification output that characterizes the eye tissue; and

a subsystem configured to:

receive a first image of eye tissue captured using the first imaging modality;

provide the first image as input to each of the segmentation neural networks in the first set to obtain one or more segmentation maps of the eye tissue in the first image;

generate, from each of the segmentation maps, a respective classification input; and

provide, for each of the segmentation maps, the classification input for the segmentation map as input to each of the classification neural networks to obtain, for each segmentation map, a respective classification output from each classification neural network; and

generate, from the respective classification outputs for each of the segmentation maps, a final classification output for the first image.
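The pipeline in this claim, an ensemble of segmentation networks feeding an ensemble of classification networks, with the outputs aggregated into a final result, can be sketched as below. The stand-in "networks", the tissue-proportion classification input and mean aggregation are all illustrative assumptions rather than the patented method:

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 4
n_tissue_types, n_classes = 3, 2

image = rng.random((H, W))  # stand-in for an eye-tissue scan

# Stand-ins for trained networks: each "segmentation network" assigns a
# tissue type per pixel; each "classification network" scores a diagnosis
# from the tissue-type proportions of a segmentation map.
def segment(img, seed):
    r = np.random.default_rng(seed)
    logits = r.normal(size=(H, W, n_tissue_types)) + img[..., None]
    return logits.argmax(axis=-1)  # segmentation map: a tissue type per pixel

def classify(seg_map, seed):
    r = np.random.default_rng(seed)
    proportions = np.bincount(seg_map.ravel(), minlength=n_tissue_types) / seg_map.size
    return r.normal(size=(n_classes, n_tissue_types)) @ proportions  # class scores

seg_maps = [segment(image, s) for s in range(2)]               # first set of nets
outputs = [classify(m, s) for m in seg_maps for s in range(2)]  # every pairing
final = np.mean(outputs, axis=0)  # aggregate into a final classification output
```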

DeepMind Patent US8644607 (B1)

  1. A method for processing an image to generate a signature which is characteristic of a pattern within said image comprising:

receiving an image;

overlaying a window at multiple locations on said image to define a plurality of sub-images within said image, with each sub-image having a plurality of pixels having a luminance level;

determining a luminance value for each said sub-image, wherein said luminance value is derived from said luminance levels of said plurality of pixels;

combining said luminance values for each of said sub-images to form said signature;

wherein said combining is such that said signature is independent of the location of each sub-image.

DeepMind Patent US8971669 (B2)

  1. A non-transitory computer readable medium storing a computer program code that, when executed by one or more computers, causes the one or more computers to perform operations for processing an image to generate a signature which is characteristic of a pattern within the image, the operations comprising:

receiving an image;

overlaying a window at multiple locations on the image to define a plurality of sub-images within the image, with each sub-image having a plurality of pixels having a luminance level;

determining a luminance value for each sub-image, wherein said luminance value is derived from the luminance levels of the plurality of pixels in the sub-image; and

combining the luminance values for each of the sub-images to form a signature for the image;

wherein the combining is such that the signature is independent of the location of each sub-image.
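The two related claims above (method and computer-readable-medium counterparts) describe windowing an image into sub-images, deriving a luminance value per sub-image, and combining the values so that the signature is position-independent. A minimal sketch follows; the non-overlapping window placement, the mean as the luminance value, and sorting as the order-invariant combination are illustrative choices, not taken from the patents:

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(16, 16))  # pixel luminance levels

window = 4  # the window overlaid at multiple locations to define sub-images
luminances = []
for r in range(0, image.shape[0], window):
    for c in range(0, image.shape[1], window):
        sub = image[r:r + window, c:c + window]
        luminances.append(float(sub.mean()))  # one luminance value per sub-image

# Combining so the signature is independent of each sub-image's location:
# sorting discards position, so any permutation of the windows yields the
# same signature (one simple order-invariant choice among several).
signature = tuple(sorted(luminances))
```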

 

Table 4

Title | Publication number(s)
Method And Apparatus For Image Searching | US2014019484 (A1)
Method And Apparatus For Conducting A Search | US2014019431 (A1)
Method And Apparatus For Image Processing | US2014185959 (A1); US8971669 (B2)
Signal Processing Systems | GB2513105 (A)
Generative Neural Networks | US10176424 (B2); US2017228633 (A1)
Umgebungsnavigation Unter Verwendung Von Verstärkungslernen [Environment Navigation Using Reinforcement Learning] | DE202017106697 (U1)
Processing Sequences Using Convolutional Neural Networks | WO2018048945 (A1)
Generating Video Frames Using Neural Networks | WO2018064591 (A1)
Neural Networks For Selecting Actions To Be Performed By A Robotic Agent | WO2018071392 (A1)
Reinforcement Learning With Auxiliary Tasks | WO2018083671 (A1)
Sequence Transduction Neural Networks | WO2018083670 (A1)
Recurrent Neural Networks | WO2018083669 (A1)
Scene Understanding And Generation Using Neural Networks | WO2018083668 (A1)
Reinforcement Learning Systems | WO2018083667 (A1)
Training Action Selection Neural Networks | WO2018083532 (A1)
Continuous Control With Deep Reinforcement Learning | MX2018000942 (A)
Data-Efficient Reinforcement Learning For Continuous Control Tasks | WO2018142212 (A1)
Memory Augmented Generative Temporal Models | WO2018142378 (A1)
Neural Programming | EP3360082 (A1)
Augmenting Neural Networks With External Memory | KR20180091850 (A)
Neural Episodic Control | WO2018154100 (A1)
Multiscale Image Generation | WO2018154092 (A1)
Action Selection For Reinforcement Learning Using Neural Networks | WO2018153807 (A1)
Training Machine Learning Models | WO2018153806 (A1)
Dueling Deep Neural Networks | US2018260689 (A1)
Asynchronous Deep Reinforcement Learning | US2018260708 (A1)
Training Neural Networks Using Posterior Sharpening | WO2018172513 (A1)
Selecting Action Slates Using Reinforcement Learning | EP3384435 (A1)
Distributional Reinforcement Learning | WO2018189404 (A1)
Black-Box Optimization Using Neural Networks | WO2018189279 (A1)
Generating Images Using Neural Networks | CN108701249 (A)
Training Neural Networks Using A Prioritized Experience Memory | CN108701252 (A)
Training Neural Networks Using Normalized Target Outputs | CN108701253 (A)
Associative Long Short-Term Memory Neural Network Layers | EP3398118 (A1)
Compressing Images Using Neural Networks | EP3398114 (A1)
Augmenting Neural Networks With External Memory | EP3398117 (A1)
Generating Audio Using Neural Networks | US2018322891 (A1)
Processing Text Sequences Using Neural Networks | US2018329897 (A1)
Spatial Transformer Modules | US2018330185 (A1)
Generating Output Examples Using Bit Blocks | US2018336455 (A1)
Programmable Reinforcement Learning Systems | WO2018211146 (A1)
Making Object-Level Predictions Of The Future State Of A Physical System | WO2018211144 (A1)
Neural Network System | WO2018211143 (A1)
Imagination-Based Agent Neural Networks | WO2018211142 (A1)
Imagination-Based Agent Neural Networks | WO2018211141 (A1)
Data Efficient Imitation Of Diverse Behaviors | WO2018211140 (A1)
Training Action Selection Neural Networks Using A Differentiable Credit Function | WO2018211139 (A1)
Multitask Neural Network Systems | WO2018211138 (A1)
Neural Network Systems For Action Recognition In Videos | WO2018210796 (A1)
Training Action Selection Neural Networks Using Look-Ahead Search | WO2018215665 (A1)
Noisy Neural Network Layers | WO2018215344 (A1)
Training Action Selection Neural Networks | WO2018224695 (A1)
Generating Discrete Latent Representations Of Input Data Items | WO2018224690 (A1)
Selecting Actions Using Multi-Modal Inputs | WO2018224471 (A1)
Feedforward Generative Neural Networks | US2018365554 (A1)
Generalizable Medical Image Analysis Using Segmentation And Classification Neural Networks | US10198832 (B2); US2019005684 (A1)
Training Action Selection Neural Networks Using Apprenticeship | WO2019002465 (A1)
Learning Visual Concepts Using Neural Networks | WO2019011968 (A1)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
