Video evidence has long been a vital tool in criminal investigations and court proceedings. It provides a visual record of events and can be powerful in helping juries reach an informed verdict. However, as technology advances, there are growing concerns about the reliability of video evidence and its susceptibility to manipulation. The rise of artificial intelligence (AI) has put these concerns squarely in the spotlight.
TL;DR at the bottom
The Problem of Deepfakes
A deepfake is a type of synthetic media that uses artificial intelligence (AI) to create realistic video, audio or images that are designed to deceive the viewer. Deepfakes are created by using a machine learning algorithm to analyse and manipulate a large amount of data, such as photos, videos or audio recordings. This data is then used to create a digital model of a person's face or voice, which can be used to generate new content that appears to be genuine to the viewer.
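One common deepfake architecture, popularised by open-source face-swap tools, trains a single shared encoder together with one decoder per identity; swapping faces then means decoding person A's encoded frame with person B's decoder. The sketch below illustrates only that data flow, using untrained random linear maps and toy sizes (no real model is trained here, and the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, PIXELS = 16, 64 * 64  # toy sizes, far smaller than real models

# Shared encoder: compresses any face into a common latent space.
W_enc = rng.normal(size=(LATENT, PIXELS)) * 0.01

# One decoder per identity; in a real system each is trained to
# reconstruct only that person's face from the shared latents.
W_dec_a = rng.normal(size=(PIXELS, LATENT)) * 0.01
W_dec_b = rng.normal(size=(PIXELS, LATENT)) * 0.01

def encode(frame):
    return W_enc @ frame

def decode(latent, W_dec):
    return W_dec @ latent

frame_of_a = rng.random(PIXELS)           # a frame showing person A
latent = encode(frame_of_a)               # identity-agnostic representation

reconstruction = decode(latent, W_dec_a)  # normal use: A in, A out
fake = decode(latent, W_dec_b)            # the swap: A's pose, B's face

print(fake.shape)  # (4096,) — same shape as the input frame
```

Because the encoder is shared, the latent code captures pose and expression rather than identity, which is what lets the second decoder paint a different face onto the same performance.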
Deepfakes are becoming increasingly sophisticated and realistic, and they have been used for a range of purposes, including entertainment, political propaganda, and fraud. In fact, the potential harm of deepfakes has already been recognized by policymakers. In 2019, the US House Intelligence Committee held a hearing to discuss the possible effects that deepfakes could have on elections and democracy.
The committee heard from experts who testified that deepfakes could be used to spread disinformation and manipulate public opinion, leading to a loss of trust in institutions and even the erosion of democracy itself. The hearing highlighted the urgent need for policymakers to address the threats posed by deepfakes and to develop new strategies for detecting and combating them.
Deepfakes are also becoming increasingly difficult to detect. A report by the cybersecurity company Recorded Future found that deepfakes are becoming both more convincing and more widespread. This is a significant concern for the legal system, since video evidence is often used to identify and prosecute criminals. With deepfakes, the person in a video could be altered, or a person could be inserted entirely, allowing a threat actor to distort the reality of events. That, in turn, could lead a jury to form an incorrect judgment and misguide the course of justice, with a whole wealth of consequences.
How reliable is video evidence?
Even authentic video evidence can be unreliable. Several factors can affect its accuracy, including poor lighting, camera angles, and compression artifacts. Additionally, video evidence can be edited to highlight or downplay certain aspects of an event, potentially biasing the way jurors interpret it.
There have been several studies on the reliability of video evidence in court. For example, a study published in the Journal of Criminal Law and Criminology found that jurors are more likely to convict defendants when presented with video evidence, regardless of whether the evidence is actually relevant to the case. Another study published in the same journal found that video evidence is more persuasive than eyewitness testimony. This makes sense: an eyewitness can misremember or forget what happened, or in the worst cases lie to the jurors to push an agenda or help a defendant. Deepfakes could upend this calculus, because a video could be altered in a way that seems authentic, and as a video it could carry more weight than a testimony.
The solution
There are also ways that AI could be used to enhance the reliability of video evidence. For example, AI algorithms could be used to analyse video footage and flag potential issues with the evidence, such as poor lighting or camera angles. AI could also be used to identify patterns and anomalies in video footage that could indicate tampering.
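One very simple anomaly signal of the kind described above is the frame-to-frame difference: a crude splice or inserted segment often produces an abrupt jump in pixel statistics between consecutive frames. Here is a toy sketch on synthetic frames; real forensic detectors use far richer features, so the data and threshold are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "video": 20 smoothly varying frames, with frame 12
# replaced by unrelated content to simulate a crude splice.
frames = np.cumsum(rng.normal(0, 0.1, size=(20, 32, 32)), axis=0)
frames[12] = rng.normal(5, 1, size=(32, 32))  # the tampered frame

# Mean absolute difference between consecutive frames.
diffs = np.mean(np.abs(np.diff(frames, axis=0)), axis=(1, 2))

# Flag jumps far above the typical inter-frame change; the median
# is robust to the spikes we are trying to detect.
threshold = 5 * np.median(diffs)
suspects = np.where(diffs > threshold)[0] + 1  # diff i compares frames i and i+1

print(suspects)  # flags the splice boundary around frame 12
```

Both frames 12 and 13 get flagged, because the tampered frame differs sharply from the frame before it and the frame after it; in practice an analyst would then inspect the flagged region more closely.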
One study conducted by researchers at the University of California, Berkeley found that AI algorithms could be used to identify deepfakes with a high degree of accuracy. The researchers used a machine learning model to analyse a large dataset of deepfakes and authentic videos, and were able to accurately identify deepfakes with a success rate of 95%.
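A headline figure like 95% accuracy only tells part of the story: in a forensic setting, false positives (authentic evidence flagged as fake) and false negatives (fakes that slip through) matter very differently. A small illustration of the distinction, on made-up labels and predictions:

```python
# Evaluating a deepfake detector on a labelled test set
# (hypothetical values; 1 = deepfake, 0 = authentic).
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]

tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)

accuracy  = (tp + tn) / len(labels)
precision = tp / (tp + fp)   # flagged videos that really were fakes
recall    = tp / (tp + fn)   # fakes the detector actually caught

print(accuracy, precision, recall)  # 0.8 0.75 0.75
```

A court would likely care most about precision (how often genuine evidence is wrongly branded a fake), so accuracy alone is not enough to judge whether a detector is fit for legal use.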
Another study conducted by researchers at the University of Surrey found that AI could be used to detect changes made to authentic video evidence. The researchers used a machine learning algorithm to analyse video footage and identify changes that had been made to the footage, such as edits or image tampering. The algorithm was able to detect these changes with a high degree of accuracy.
TL;DR
Video evidence is very important in court and to the legal system as a whole. However, deepfakes can manipulate video to misrepresent the truth very convincingly. On the other hand, AI can be used to detect deepfakes with high accuracy (sometimes up to 95%). Ultimately there isn't a single fix, but it is food for thought for the future.