Computer Vision Privacy: We take it seriously.

May 9, 2021
4 mins

We understand how important privacy is to our customers. That's why we have built the Protex platform on privacy design principles and tools that preserve both organisational and individual privacy. This research-focused article discusses computer vision and some of the privacy design frameworks underpinning the platform.

"If you don't feel like reading the whole article, then just take it that at Protex AI we really understand the computer vision privacy landscape and take it extremely seriously!"

Introduction

Computer vision (CV) is the term given to technology that allows computers to gain a high-level understanding of digital images or videos. Classical computer vision techniques have existed since the early 1960s, but recent advancements in deep-learning algorithms, cameras, networks, and computing capabilities have allowed CV to flourish. Computer vision holds tremendous potential to solve problems in the healthcare, automotive, and manufacturing industries, to name but a few, and its widespread adoption is driven by these advancements and the availability of cheaper camera infrastructure. Although computer vision-based systems can add substantial value to organisations and wider society, they also pose a myriad of ethical threats, the majority of which centre around privacy. In many cases, privacy concerns act as a barrier to the deployment of this technology, and thus to the indisputable benefits it can offer.

The field of privacy in computer vision has recently become a burgeoning research area, as more and more organisations begin to realise and understand the potential privacy concerns that computer vision technologies pose. This understanding of the implications of a privacy breach, and the emphasis now placed on data privacy, has been driven by recent high-profile examples that garnered significant negative media attention. Companies are therefore putting significant emphasis on ensuring that their systems are developed with, and underpinned by, strong privacy and security research. There is a particular interplay between security and privacy mechanisms in modern technology systems: while the security designs embedded in these systems focus on safeguarding the data stored, the privacy principles implemented focus on safeguarding user identity. There is always a risk of a data breach in any system, which even tech giants like Facebook and Twitter have been unable to prevent. It is therefore of the utmost importance that, in the event of a data leak, any data leaked is encrypted, encoded, or structured in a manner that preserves the privacy of data stakeholders.
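
As a minimal illustration of that last point, the sketch below encrypts an event record before it is ever written to storage, so that a breach of the storage layer alone yields only ciphertext. It uses Python's cryptography library; the record contents and the key handling are simplified assumptions for illustration, not a description of how the Protex platform manages keys.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key. In practice the key would live in a key
# management service, never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical event record, encrypted before it is written to storage.
# If the storage layer is breached, the attacker obtains only ciphertext.
record = b'{"site": "warehouse-3", "event": "zone-breach"}'
ciphertext = fernet.encrypt(record)

# Only a holder of the key can recover the original record.
assert fernet.decrypt(ciphertext) == record
```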

The concept of privacy is difficult to measure and, in some cases, to define. In the western world, definitions of privacy are strictly individualistic, while in the east they are looser and based more upon the collective community [Dufresne-Camaro 2020]. This is an example of how the very fundamental principles upon which a definition of privacy is based can differ drastically. As such, there are a multitude of grey areas when privacy is discussed in relation to video data, and it can be difficult to outline a set of concrete rules or guidelines compatible with every definition and application. The frameworks that have been designed to inform how this technology should be used are therefore based on guidelines and policies.

Privacy by Design - A Privacy Design Framework

Privacy by Design (PbD) was initially developed and proposed by Ann Cavoukian and formalised by a joint team of data commissioners in 1995. The framework was developed in response to the growing amount of information being created in the industrial manufacturing sector and the growing need to manage it responsibly. Furthermore, Cavoukian referenced increasing system complexity as a factor that presents profound challenges for informational privacy [Cavoukian 2006]. The framework is based on the active embedding of privacy and centres around seven principles, described in Fig 1.

Fig 1. Privacy by Design (PbD) Overview

Each of these principles informs system design by presenting high-level objectives to ensure Privacy by Design. They are briefly outlined below:

1. Proactive not Reactive, Preventative not Remedial: Privacy risks should be anticipated, and action taken before a privacy infraction occurs. PbD is therefore a before-the-fact design measure rather than one which offers remedies for privacy shortcomings.

2. Privacy as the Default Setting: Personally sensitive data should be detected automatically; the user should not have to enable privacy, because it is enabled by default (see the sketch following this list for what this can look like in a computer vision pipeline).

3. Privacy Embedded into Design: Privacy should not be treated as an add-on; it should be integral to the system design and an essential component of the core functionality delivered by the system.

4. Full Functionality – Positive-Sum, not Zero-Sum: The implementation of privacy should not result in unnecessary trade-offs with any other component of the system, nor hamper the system's functionality to the point of rendering it useless.

5. End-to-End Security – Full Lifecycle Protection: Strong security measures are essential to ensure all data is securely retained during its life cycle and securely destroyed thereafter.

6. Visibility and Transparency – Keep it Open: All stakeholders of the system should be made aware of the privacy practices, or lack thereof, that are in place.

7. Respect for User Privacy – Keep it User-Centric: The users of the system should be central to any decision made regarding the privacy of their data.
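
As a concrete example of principle 2 in a computer vision setting, the sketch below blurs every detected face before a frame is stored or processed further, so anonymisation is the default rather than something a user must switch on. It uses OpenCV's bundled Haar cascade purely for illustration; the detector choice and blur parameters are assumptions, not a description of how the Protex platform implements this principle.

```python
import cv2

# OpenCV's bundled Haar cascade face detector, used here purely as an
# illustrative stand-in for a production-grade detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymise(frame):
    """Blur every detected face so no identifiable frame is ever stored."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace the face region with a heavy Gaussian blur in place.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    return frame

# Usage: anonymise each frame as soon as it is read, before any
# downstream processing or storage sees the raw image, e.g.:
# frame = anonymise(raw_frame)
```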

This framework came about in an era when the internet and cloud-based systems were becoming ubiquitous and central to the way in which large organisations managed their data. A parallel can be drawn between the early 2000s and now, where recent advancements in artificial intelligence and computer vision technologies have presented a variety of new challenges with respect to informational privacy. These issues particularly pertain to complex, convoluted AI-based systems that produce large quantities of data which can, in some cases, be personally sensitive by nature. It is for this reason that, although the foundational principles of PbD remain relevant and important, they struggle to explicitly define how these systems should be built and instead exist to inform high-level privacy considerations. These challenges, amongst others, are presented in [Spiekermann 2012]. The article discusses a common misconception of privacy frameworks, particularly PbD: that you can simply take a few Privacy-Enhancing Technologies (PETs), add a good dose of security, and thereby create a fault-proof systems landscape for the future. Spiekermann discusses three main challenges:

1. Privacy is a fuzzy concept and is thus difficult to protect. We need to come to terms with what it is we want to protect. Moreover, conceptually and methodologically, privacy is often confounded with security. We need to start distinguishing security from privacy.

2. No agreed-upon methodology supports the systematic engineering of privacy into systems. System development life cycles rarely leave room for privacy considerations.

3. Little knowledge exists about the tangible and intangible benefits and risks associated with privacy practices.

"At Protex AI we not only understand how important privacy is to our customers, we also understand the mechanisms and technologies needed to implement a fully privacy preserving platform!"
