New AI Camera Security Systems
Geneva Simms and Nathan Lavertue were driving to their country house in Dutchess County one recent weekend when there was yet another spring snowstorm. But when they arrived late that night, their home, which was built around 1780 and was once a Quaker meetinghouse and a stop on the Underground Railroad, wasn’t bitterly cold. That is because during his lunch break in Brooklyn, Mr. Lavertue had turned on the heat remotely, using his smartphone.
“We have three cameras — two exterior and one interior — four Nest thermostats, two Echo Dots, one Echo Show, one traditional Echo and 10 smart lights. And also the Nest smoke detector,” said Mr. Lavertue, a global experience design director for IBM, who installed the equipment himself. “The cameras are for security, but they provide plenty of entertainment. I have really funny footage of Geneva running after a U.P.S. truck.”
Like a lot of second-home buyers, Mr. Lavertue and Ms. Simms, a personal trainer, corporate wellness coach and founder of Empower to Power, were overwhelmed at first with figuring out how to protect their country house when they weren’t there. The married couple, who rent in Brooklyn and had never owned before buying their Stanfordville house last year, researched their options and then turned to do-it-yourself smart home components. They plan to add another smoke/carbon monoxide detector, several more smart thermostats and a digital front-door lock from August to complete their home security system.
Artificially intelligent security cameras are spotting crimes before they happen
Next time you see a surveillance camera following you down an alleyway, don't be too sure that there's a human watching.
Surveillance camera companies are increasingly relying on artificial intelligence (AI) to automatically identify and detect problematic and criminal behavior as it happens — everything from public intoxication to trespassing.
An automated camera system called AIsight (pronounced eyesight), installed in Boston after the 2013 marathon bombing, monitors camera feeds in real time and alerts authorities if it spots unusual activity, according to Bloomberg.
AIsight cameras use a statistical method called machine learning to "learn what is normal for an area and then alerts on abnormal activity," according to its creator, Behavioral Recognition Systems (BRS Labs).
Slate reports that could mean picking up anything from "unusual loitering to activity occurring in restricted areas."
"We are recognizing a precursor pattern that may be associated with a crime that happens," Wesley Cobb, chief science officer at the company, told Bloomberg. "Casing the joint, poking around where he shouldn't be, going around looking at the back entrances to buildings."
And these systems aren't just looking for criminals. In early August, West Japan Railway installed 46 security cameras that can "automatically search for signs of intoxication" in passengers at the Kyobashi train station in Osaka, Japan, according to the Wall Street Journal.
The AI watches for people stumbling, napping on benches, or standing motionless on the platform for long periods before suddenly lurching forward. The system can then alert human attendants if a person is in danger of falling onto the tracks or hurting themselves.
Drunken passengers frequently fall or stumble off the train platform. West Japan Railway conducted a study that found 60% of the 221 people hit by trains in Japan in 2013 were intoxicated, the Wall Street Journal reports.
A graphic from the West Japan Railway Company shows how the system works.
Using AI in surveillance systems makes sense — AI can catch what humans miss, operate around the clock, and never tire or fall asleep on the job. But it raises concerns with "privacy and civil liberties advocates," because it "treats everyone as a potential criminal and targets people for crimes they have not yet committed," according to Slate.
Stuart Russell, AI researcher at the University of California, Berkeley and co-author of the standard textbook "Artificial Intelligence: A Modern Approach," thinks intelligent "watching" programs will likely unsettle people more than a human monitor does, even though most people would reasonably expect they were being watched if they encountered a surveillance camera.
"What if there's an AI system, which actually can understand everything that you're doing?" Russell told Tech Insider. "Would that feel different from a human watching directly? I expect people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what it's seeing."
This is just one of the many security and privacy issues that courts will have to grapple with as AI improves in the coming years, like the legality of AI that can buy up tickets and then scalp them online.
AI for Crime Prevention and Detection – Current Applications
Companies and cities all over the world are experimenting with using artificial intelligence to reduce and prevent crime, and to respond more quickly to crimes in progress. The idea behind many of these projects is that crimes are relatively predictable; it just requires being able to sort through a massive volume of data to find patterns that are useful to law enforcement. This kind of data analysis was technologically impossible a few decades ago, but the hope is that recent developments in machine learning are up to the task.
There is good reason why companies and governments are both interested in trying to use AI in this manner. As of 2010, the United States spent over $80 billion a year on incarceration at the state, local, and federal levels. Estimates put the United States’ total spending on law enforcement at over $100 billion a year. Law enforcement and prisons make up a substantial percentage of local government budgets.
Direct government spending is only a small fraction of how crime economically impacts cities and individuals. Victims of crime can face medical bills. High crime can reduce the value of property and force companies to spend more on security. And criminal records can significantly reduce an individual’s long-term employment prospects. University of Pennsylvania professor Aaron Chalfin reviewed the current research on the economic impact of crime; most analyses put the cost at approximately 2 percent of gross domestic product in the United States.
This article will examine AI and machine learning applications in crime prevention. In the rest of the article below, we answer the following questions:
What AI crime prevention technologies exist today?
How are cities using these technologies currently?
What results (if any) have AI crime prevention technologies had thus far?
Companies are attempting to use AI to address crime in a variety of ways, which this article will break down into two general categories: (a) ways AI is being used to detect crimes, and (b) ways AI is being used to prevent future crimes.
AI Crime Detection
City infrastructure is becoming smarter and more connected. This provides cities with sources of real-time information, ranging from traditional security cameras to smart lamps, which they can use to detect crimes as they happen. With the help of AI, the data collected can be used to detect gunfire and pinpoint where the gunshots came from. Below, we cover a range of present applications:
The company ShotSpotter uses smart city infrastructure to triangulate the location of a gunshot, as they explain in this 3-minute video:
According to ShotSpotter, only about 20 percent of gunfire events are called in to 9-1-1 by individuals, and even when people do report an event, they often can only provide vague or potentially inaccurate information. The company claims its system can alert authorities in effectively real time with information about the type of gunfire and a location accurate to within 10 feet. Multiple sound sensors pick up the sound of a gunshot, and their machine learning algorithm triangulates where the shot happened by comparing data such as when each sensor heard the sound, the noise level, and how the sound echoed off buildings.
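The core triangulation idea can be sketched in a few lines. This is a hypothetical, brute-force illustration of time-difference-of-arrival multilateration, not ShotSpotter's actual algorithm; the sensor layout, grid search, and speed-of-sound constant are all assumptions for the example:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at 20 °C

def locate_shot(sensors, arrival_times, grid_step=1.0, extent=200.0):
    """Estimate a gunshot's (x, y) origin from when each sensor heard it.

    Brute-force version: try every point on a grid and keep the one whose
    predicted pairwise time-differences-of-arrival best match the observed ones.
    """
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    best_point, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for ix in range(steps + 1):
        for iy in range(steps + 1):
            x, y = ix * grid_step, iy * grid_step
            # Predicted travel time from this candidate point to each sensor
            t = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                 for sx, sy in sensors]
            # Compare predicted vs. observed time differences for every sensor pair
            err = sum(((t[i] - t[j]) - (arrival_times[i] - arrival_times[j])) ** 2
                      for i, j in pairs)
            if err < best_err:
                best_point, best_err = (x, y), err
    return best_point

# Three sensors hear a shot fired at (50, 80); simulate their arrival times
sensors = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0)]
true_source = (50.0, 80.0)
times = [math.hypot(true_source[0] - sx, true_source[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate_shot(sensors, times))  # → (50.0, 80.0)
```

A production system would solve the hyperbolic equations directly rather than scanning a grid, but the sketch makes the principle visible: the true origin is the point whose predicted arrival-time differences best match what the sensors heard.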
Before ShotSpotter is launched in an area, acoustic sensors and cameras are placed all over a city. When the program goes live:
An officer, detective or other law enforcement official can log in on a computer to see a map interface.
When a shot is fired, nearby sensors capture the sound, which in turn triggers the connected cameras to point in the noise’s direction.
Based on sound frequencies and volumes, the system triangulates where and between which sensors the shot took place.
On the interface, the map will move to show where a shot was detected, noting a red circle at what it has assigned as the shot’s exact location.
In a sidebar next to the map, an official can see other details like the time and number of shots fired.
The coordinates and other information can be sent immediately to an on-duty officer or fleet of officers.
A user can also access footage of the cameras which moved in the shooting’s direction.
After an incident, shot detections stay logged so a user can find the information and video for investigation purposes.
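The workflow above can be sketched as a small data pipeline. The record structure and field names here are invented for illustration; ShotSpotter's real schema is not public:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ShotAlert:
    """One detected gunfire incident, as it might appear in the map sidebar."""
    timestamp: str
    location: Tuple[float, float]  # coordinates assigned by triangulation
    shots_fired: int
    camera_ids: List[str]          # cameras that turned toward the noise

class AlertLog:
    """Hypothetical incident log: dispatches alerts and keeps them for review."""
    def __init__(self):
        self.incidents: List[ShotAlert] = []   # retained for investigation
        self.dispatched: List[str] = []        # messages sent to on-duty officers

    def report(self, alert: ShotAlert):
        self.incidents.append(alert)
        self.dispatched.append(
            f"Shots fired at {alert.location}: {alert.shots_fired} rounds"
        )

log = AlertLog()
log.report(ShotAlert("2018-03-01T22:14", (50.0, 80.0), 3, ["cam-7", "cam-9"]))
print(log.dispatched[0])  # → Shots fired at (50.0, 80.0): 3 rounds
```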
In 2011, CSG Analysis surveyed police departments in Brockton, Mass., East Palo Alto, Calif., Nassau County, N.Y., Richmond, Calif., Riviera Beach, Fla., Rochester, N.Y. and Saginaw, Mich. about their use of ShotSpotter. The study found the following problem-point themes:
All departments noted that they wanted to improve response to and investigation of gun shots, telling surveyors that, in many cases, gun shots do not result in 9-1-1 getting called.
When 9-1-1 is called by a nearby person who hears a shot, police and detectives say they don’t always get the pinpointed location information necessary for further investigation. The study did not note each department’s gunshot-response numbers or each city’s gun-violence numbers before the use of ShotSpotter.
Most officers, analysts, detectives and other surveyed department employees said they were “confident” to “very confident” in ShotSpotter’s accuracy.
Over 71 percent of respondents rated its value as “very high” on a high-low scale.
Respondents from the Brockton Police Department also told CSG that officers were able to see a gunshot detected through the software and respond to the scene of a shooting with enough time to save a victim’s life. They added that no one called 9-1-1 to report the incident.
ShotSpotter has also released case studies on how the program detected the location of campus shootings. Although they did not name the campuses, they acknowledged that one of them was on the west coast.
According to that case study, an officer heard shots fired and radioed the police department, which had already been notified by the software. According to ShotSpotter, the software determined the specific location at which 14 shots were fired. While the officer mistakenly thought the shots were fired in a nearby park, the software was able to detect that they were actually fired two blocks away from it, according to the case study.
The study notes that more officers were able to reach the location of the shooting with enough time to find valid evidence and question witnesses that were still in the area. They later arrested two suspects, according to ShotSpotter.
ShotSpotter claims to be in use in over 90 cities, including New York, Chicago, and San Diego. Most of their clients are in the United States, but last year they added Cape Town, South Africa to their list of customers. They have also been highlighted on the Boston Police Department’s website.
The company had their IPO in July 2017, and their current market cap is $183 million.
Cameras and Surveillance
The following three companies claim to use computer vision and other AI techniques to spot potentially criminal anomalies on real-time surveillance video.
While ShotSpotter listens for crime, many other companies are using cameras to watch for it. Last year Hikvision, a Chinese company which is a major security camera producer, announced they would be using chips from Movidius (an Intel company) to create cameras able to run deep neural networks right on board.
They say the new camera can better scan for license plates on cars, run facial recognition to search for potential criminals or missing people, and automatically detect suspicious anomalies like unattended bags in crowded venues. Hikvision claims they can now achieve 99% accuracy with their advanced visual analytics applications.
With 21.4 percent of the worldwide market share for CCTV and video surveillance equipment, Hikvision was the number-one supplier of video surveillance products and solutions in 2016, according to IHS.
Movidius explains the benefits of having this capacity built directly into new cameras:
Their systems have been using AI to perform tasks like facial recognition, license plate reading and unattended bag detection for several years, but that video processing has traditionally taken place on a centralized hub or in the cloud. By performing the processing within the cameras themselves, the company claims they are making the process faster and cheaper. It can also reduce the need for using significant bandwidth since only relevant information needs to be transmitted.
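The bandwidth argument is easy to see in miniature. In this hypothetical sketch, the `detect` function stands in for an on-camera neural network, and only frames with something of interest ever leave the device:

```python
def frames_to_transmit(frames, detect):
    """On-camera filtering: run the detector locally and transmit only
    frames that actually contain something of interest, instead of
    streaming all raw video to a central hub or the cloud."""
    return [f for f in frames if detect(f)]

# Toy detector: a frame is "relevant" if it contains any labeled object
detect = lambda frame: len(frame["objects"]) > 0

frames = [
    {"id": 1, "objects": []},
    {"id": 2, "objects": ["unattended_bag"]},
    {"id": 3, "objects": []},
]
sent = frames_to_transmit(frames, detect)
print([f["id"] for f in sent])  # → [2]
```

Of three frames captured, only one crosses the network, which is the cost saving the company describes.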
When a surveillance officer uses the platform, they can see a live feed of video or older videos recorded with the camera.
At the time of recording, the software, which is already trained, will pick up on faces and objects, such as bags or cars.
As a feed comes through to the surveillance officer, these objects can either change in color or will have a square outline around them.
In airports, the software can monitor specific areas and identify when a bag is dropped and left long enough to be considered abandoned.
If the AI detects the bag for a long enough time, the control room or surveillance officer will be notified through the program.
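Abandoned-object detection of this kind usually reduces to a dwell-time rule. The threshold and data layout below are assumptions for illustration, not Hikvision's implementation:

```python
ABANDON_THRESHOLD = 120  # seconds a stationary bag may sit before alerting (assumed)

def abandoned_bags(sightings, now):
    """Flag bags whose first stationary sighting is older than the threshold.

    `sightings` maps a tracked bag ID to the time (in seconds) it was
    first seen stationary with no owner nearby.
    """
    return [bag for bag, first_seen in sightings.items()
            if now - first_seen >= ABANDON_THRESHOLD]

# bag-17 has sat for 420 seconds; bag-22 for only 30
sightings = {"bag-17": 10, "bag-22": 400}
print(abandoned_bags(sightings, now=430))  # → ['bag-17']
```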
In 2005, Hikvision was selected by London officials to replace the existing CCTV security system in the borough of Hammersmith and Fulham. Hikvision’s case study noted that the CCTV system then in place had technical issues and limited customer support, and could not record at the high resolution needed for the mix of commercial and public areas within the borough.
According to the case study, released in 2013, this software has been able to detect and route an estimated 80 percent of threatening visuals, such as violent objects, caught on camera directly to police department control rooms.
Among the successes Hikvision cites is assisting with a 65% drop in crime in Sea Point, South Africa following the introduction of their camera system. Hikvision claims it placed 42 cameras on the town’s busiest roads, driven down by visitors and residents.
Because Sea Point had noticed a combination of heavy traffic and high criminal activity on these streets, it used Hikvision’s day-and-night computer vision cameras to detect and log every license plate that came into camera view.
Hikvision says the cameras used could also record in poor weather conditions.
This information was then fed to police department control rooms for further analysis, both on screens and through Hikvision software.
It is not clear how long the cameras were in place before the dip in crime occurred, or how much crime occurred on these streets before the implementation.
Tel Aviv-based Cortica, founded in 2007, offers city-wide security systems, claiming that its “unsupervised” AI software can “comb through” real-time footage from both surveillance cameras and drones to search for criminal patterns and alert law enforcement or town officials when they are detected. The company claims the program offers these AI and computer vision features:
Searching for an image or video by text or by reverse-image search.
Collecting groups of facial images relating to one event or span of time.
Analyzing the physical behavior and motions of humans to distinguish threatening from non-threatening movement patterns.
The company also offers drone-compatible software, which allows for similar image analysis as well as geo-tagging and directing the drone on an autonomous route.
The company claims its software can be used for traffic management, urban safety, travel security, surveillance in various facilities and monitoring public transit.
According to its site and demo video, Cortica’s software allows a user to upload or stream video or images as they are recorded. The software learns the patterns in those images and will highlight or circle anomalous objects that appear in them. It can also be used with x-ray machines and set to detect certain shapes, like those of weapons. The user can also click to see specific objects detected in a photo or in a frame of video.
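Reverse-image search of the kind Cortica describes is typically built on feature vectors: every image is mapped to a numeric vector, and similar images end up with similar vectors. This toy version uses hand-made three-dimensional vectors and cosine similarity; a real system would use embeddings from a trained network:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def reverse_image_search(query_vec, index, top_k=1):
    """Return the stored image names most similar to the query vector."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical index of precomputed feature vectors
index = {
    "red_car.jpg":  [0.9, 0.1, 0.0],
    "blue_bag.jpg": [0.1, 0.8, 0.3],
    "crowd.jpg":    [0.2, 0.2, 0.9],
}
print(reverse_image_search([0.85, 0.15, 0.05], index))  # → ['red_car.jpg']
```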
The 1-minute promotional video below explains how clients, such as airports, could use Cortica’s image software:
In January 2018, India and Israel’s prime ministers announced that they were partnering with Cortica to develop a new surveillance monitoring system for their countries using Cortica’s AI software. Cortica has not released any case studies on its site.
The company’s Chief Scientist, Josh Zeevi, has studied computer science through fellowships at Harvard and MIT and received a PhD from the University of California, Berkeley. He is also a professor of electrical engineering at Technion – Israel Institute of Technology. According to a study from CB Insights, Cortica is one of the leaders in AI-related patents, holding 38 granted patents.
Criminal Behavior Detection – Cloud Walk
The Chinese facial recognition company Cloud Walk Technology is trying to actually predict whether an individual will commit a crime before it happens. The company plans to use facial recognition and gait analysis technology to help the government use advanced AI to find and track individuals.
The system will detect suspicious changes in behavior or unusual movements, such as an individual repeatedly walking back and forth in a certain area, which could indicate a pickpocket or someone casing the area for a future crime. It will also track individuals over time.
The company told the FT, “Of course, if someone buys a kitchen knife that’s OK, but if the person also buys a sack and a hammer later, that person is becoming suspicious.”
No case studies or video demonstrations on this application within law enforcement could be found.
AI and the Search for Missing Children
While Intel is known for providing a variety of tech and AI solutions, it also claims to have furnished an organization with AI technology specifically created to find missing children. In an interview, Intel’s Chief Data Scientist, Bob Rogers, discussed the missing-person initiative.
According to Rogers,
Intel is helping the non-profit, National Center for Missing and Exploited Children (NCMEC), which takes in, prioritizes, organizes and investigates tips and reports for the FBI and other law enforcement agencies.
This organization had to sort through over 8.2 million tips which came by phone, text, email and other online portals.
Tips needed to be prioritized and reviewed by a team of only 25 analysts.
Some of these tips were photographic, such as those involving hashing, a technique used to flag the circulation of child pornography online.
The analysts and Intel scientists worked together to “modernize” the non-profit’s existing infrastructure and sorting system, while also building a platform that allows them to easily examine and further organize data. Along with categorizing, organizing and prioritizing tips, the software can also use computer vision to recognize and show connections between faces, or red-flag high-priority images that are submitted as tips.
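Tip prioritization like this can be modeled as a priority queue. The scoring rules below (flagged-image matches jump the queue, then tips linked to open cases, then everything else in arrival order) are invented for illustration; NCMEC's actual criteria are not public:

```python
import heapq

def tip_priority(tip):
    """Lower score = more urgent. Assumed rules: flagged-image matches
    first, tips linked to open cases next, everything else last."""
    score = 0
    if tip.get("matches_flagged_image"):
        score -= 100
    if tip.get("linked_open_case"):
        score -= 10
    return score

def prioritize(tips):
    """Order incoming tips so analysts see the most urgent ones first.

    The arrival index breaks ties, so equal-priority tips stay in
    first-come-first-served order.
    """
    heap = [(tip_priority(t), i, t) for i, t in enumerate(tips)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2]["id"] for _ in range(len(heap))]

tips = [
    {"id": "tip-1"},
    {"id": "tip-2", "matches_flagged_image": True},
    {"id": "tip-3", "linked_open_case": True},
]
print(prioritize(tips))  # → ['tip-2', 'tip-3', 'tip-1']
```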
Intel did not include a demo of the platform they upgraded, but this flowchart from the company below notes how they claim the process works.
Intel Missing Child Program Explanation
Intel has not disclosed other clients that it has worked with on this specific topic. Because this was unveiled in mid-2017, results directly linked with the application have not yet been noted by Intel or the media.
AI for Crime Prediction and Prevention
The goal of any society shouldn’t be to just catch criminals but to prevent crimes from happening in the first place, and in the examples below, we’ll explore how this might be achieved with artificial intelligence.
Predicting Future Crime Spots – Predpol
One company using big data and machine learning to try to predict when and where crime will take place is PredPol. They claim that by analyzing existing data on past crimes they can predict when and where new crimes are most likely to occur. Currently their system is being used in several American cities, including Los Angeles, which was an early adopter.
In this video, a PredPol co-founder explains how their system works.
Their algorithm is based around the observation that certain crime types tend to cluster in time and space. By using historical data and observing where recent crimes took place they claim they can predict where future crimes will likely happen.
For example, a rash of burglaries in one area could correlate with more burglaries in surrounding areas in the near future. They call this technique real-time epidemic-type aftershock sequence crime forecasting. Their system highlights possible hotspots on a map that the police should consider patrolling more heavily.
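Epidemic-type aftershock sequence (ETAS) models treat crime like seismic aftershocks: a constant background rate plus a boost from each recent, nearby event that fades with time and distance. The following is a toy intensity function with made-up parameter values, not PredPol's calibrated model:

```python
import math

def etas_intensity(location, t_now, past_crimes,
                   mu=0.1, k=0.5, omega=0.2, sigma=50.0):
    """Toy ETAS-style rate at `location`: background rate `mu` plus a
    contribution from each past crime that decays exponentially in time
    and as a Gaussian in distance. All parameters are illustrative."""
    x, y = location
    rate = mu  # background rate of crime at this spot
    for cx, cy, ct in past_crimes:
        dt = t_now - ct
        if dt <= 0:
            continue  # future or simultaneous events contribute nothing
        dist2 = (x - cx) ** 2 + (y - cy) ** 2
        rate += k * omega * math.exp(-omega * dt) * math.exp(-dist2 / (2 * sigma ** 2))
    return rate

# A recent nearby burglary raises the predicted rate far more than a
# distant, older one does
recent = [(10.0, 10.0, 9.0)]     # (x, y, day of occurrence)
old_far = [(900.0, 900.0, 1.0)]
print(etas_intensity((12.0, 11.0), t_now=10.0, past_crimes=recent) >
      etas_intensity((12.0, 11.0), t_now=10.0, past_crimes=old_far))  # → True
```

Hotspot maps then follow by evaluating this intensity over a grid of cells and shading the cells with the highest rates.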
PredPol screen shot
A screen shot of PredPol’s map application. Source – PredPol.com
One success the company highlights is Tacoma, Washington, which saw a 22 percent drop in residential burglaries within two years after adopting the system in January of 2013.
Tacoma implemented the system as part of its Burglary Reduction Initiative, which was established in an effort to make Tacoma one of the safest-ranked cities in the United States.
While 2013 burglary rates were not noted in the story, the police department said the 22 percent drop exceeded their initial goal of decreasing the crime by 7.5 percent.
Given that crime is such a complex issue with numerous causes, it is very difficult to isolate the impact any one tool has. However, one study by researchers at Predpol concluded that police patrols based on near real-time epidemic-type aftershock sequence crime forecasting (what Predpol uses) results in a 7.4 percent reduction in crime volume.
Pretrial Release and Parole – Hart
After being charged with a crime, most individuals are released until they actually stand trial. In the past, deciding who should be released pretrial, or at what amount an individual’s bail should be set, has mainly been done by judges using their best judgment. In just a few minutes, judges have to attempt to determine whether someone is a flight risk, a serious danger to society, or at risk of harming a witness if released. It is an imperfect system open to bias.
The city of Durham, in the United Kingdom, is using AI to improve on the current system for deciding whether to release a suspect. The program they’ve commissioned, called the Harm Assessment Risk Tool (Hart), was fed five years’ worth of Durham police criminal data, from 2008 to 2012. From there, Hart’s predictive algorithms attempt to predict whether an individual is at low, medium or high risk of committing a crime.
The city has been testing the system since 2013 and comparing its estimates to real-world results. The city claims Hart’s predictions that an individual would be low risk were accurate 98 percent of the time, and its predictions that an individual would be high risk were accurate 88 percent of the time.
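Validation of this kind boils down to comparing each tier's predictions against what later happened. The sketch below simplifies Hart to two tiers (ignoring its medium band) and uses invented data; it is a sketch of the evaluation idea, not of Hart itself:

```python
def accuracy_by_tier(predictions, outcomes):
    """For each risk tier, what fraction of predictions proved correct?

    A 'high' prediction counts as correct if the person reoffended;
    a 'low' prediction counts as correct if they did not.
    """
    counts = {}
    for tier, reoffended in zip(predictions, outcomes):
        hit = (tier == "high") == reoffended
        total, correct = counts.get(tier, (0, 0))
        counts[tier] = (total + 1, correct + (1 if hit else 0))
    return {tier: correct / total for tier, (total, correct) in counts.items()}

# Invented follow-up data for six released individuals
preds    = ["low", "low", "low", "low", "high", "high"]
outcomes = [False, False, False, True,  True,   False]  # did they reoffend?
print(accuracy_by_tier(preds, outcomes))  # → {'low': 0.75, 'high': 0.5}
```

Durham's 98 percent and 88 percent figures are exactly these per-tier fractions, computed over five years of real follow-up data instead of six toy records.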
Because this is a city-funded project, there is no other report of Hart clients or case studies. We also could not find a video demonstration of the program developed.
Crime Recidivism Prediction – COMPAS
Jurisdictions in the United States have been using more basic risk assessment algorithms for over a decade to make decisions about pretrial release and whether or not to give an individual parole. One of the most popular is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) from Equivant, which is used throughout Wisconsin and in numerous other locations. A 2012 analysis by the New York Division of Criminal Justice Services found that COMPAS’s “Recidivism Scale worked effectively and achieved satisfactory predictive accuracy.”
COMPAS has recently come under fire after a ProPublica investigation. The media organization’s analysis indicated the system might indirectly contain a strong racial bias. They found, “[T]hat black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).”
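The statistic at the heart of ProPublica's finding is the false positive rate: among people who did not reoffend, the share who were nevertheless labeled high risk. The numbers below are illustrative toy data echoing the reported disparity, not ProPublica's actual dataset:

```python
def false_positive_rate(high_risk_flags, recidivated):
    """Among people who did NOT reoffend, what fraction were wrongly
    labeled high risk? This is the rate ProPublica compared across
    racial groups (45 percent vs. 23 percent)."""
    non_recid_flags = [flag for flag, r in zip(high_risk_flags, recidivated)
                       if not r]
    return sum(non_recid_flags) / len(non_recid_flags)

# Toy records: was each person flagged high risk, and did they reoffend?
group_a_flags = [True, True, False, False, True]
group_a_recid = [False, False, False, False, True]
group_b_flags = [True, False, False, False, True]
group_b_recid = [False, False, False, False, True]

print(false_positive_rate(group_a_flags, group_a_recid),
      false_positive_rate(group_b_flags, group_b_recid))  # → 0.5 0.25
```

In this toy example, group A's non-recidivists are flagged high risk twice as often as group B's, which is the shape of the disparity ProPublica reported.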
ProPublica’s coverage of COMPAS was overtly critical (this image is a screen shot from ProPublica)
The report raises the question of whether better AI can eventually produce more accurate predictions or if it would reinforce existing problems. Any system will be based off of real world data, but if the real world data is generated by biased police officers, it can make the AI biased.
A 2018 study in the journal Science Advances suggested in its abstract that this software is “no more accurate or fair than predictions made by people with little or no criminal justice experience.”
Splitting 1,000 older case files among a group of 20 human participants, the publication reported an average prediction accuracy of roughly 62 percent. The study claims the same 1,000 case files were fed into COMPAS, which showed 65.2 percent prediction accuracy. The study did not note how old the case files were or whether any of the individuals in them were alive or deceased.
Concluding Thoughts and Future Outlook
The ability of AI to allow governments to collect, track, and analyze data for the purpose of policing does raise some serious questions about privacy and the threat that machine learning could create a feedback loop that reinforces institutional bias. This article wasn’t dedicated to these important issues but the AI Now Institute at New York University is a research center dedicated to understanding the social implications of artificial intelligence which can provide more details about these concerns.
While civil liberty concerns do exist, they have so far not stopped the spread of AI technology in surveillance and crime prediction. According to IHS, there were 245 million professionally installed video surveillance cameras operating in 2014, and the number of security cameras in North America effectively doubled from 2012 to 2016. With ever more data being fed to security and law enforcement agencies, it is only natural that they will keep investing in AI tools to sift through this ever-growing stream of data.
The use of AI and machine learning to detect crime via sound or cameras exists today, appears to work, and is expected to continue to expand. The use of AI to predict crimes or an individual’s likelihood of committing a crime shows promise but is still more of an unknown. The biggest challenge will probably be “proving” to politicians that it works: when a system is designed to stop something from happening, it is difficult to prove the negative.
Companies that are directly involved in providing governments with AI tools to monitor areas or predict crime will likely benefit from a positive feedback loop. Improvements in crime prevention technology will likely spur increased total spending on these technologies.
PEW Research crime reduction
From PEW Research “5 facts about crime in the U.S.” from February 21, 2018
While effectively all categories of crime have been trending down for decades, in major American cities the share of general funds spent on law enforcement has grown steadily. In American politics, there remains a strong emphasis on law enforcement. The drop in crime may even have created a feedback loop: instead of a lower crime rate being seen as a reason to cut police services, it is seen as proof that law enforcement is working and therefore deserves more money.
After all, a lower crime rate has broad social benefits for a community and real political benefit for the local elected officials responsible for budgeting. In New York City, both liberal mayors like Bill de Blasio and conservative mayors like Rudy Giuliani have heavily cited the drop in crime under their tenure during re-election campaigns.
Most of these technologies, which are or were mainly developed with government clients in mind, have spillover benefits for private companies. The same AI security cameras used by governments are also being used by private companies to protect their assets. Technology used to predict crime or automatically flag suspicious behavior can help companies with loss prevention or with deciding where to establish new locations.