Artificial Intelligence
Artificial Intelligence (A.I.), once the stuff of science fiction, is becoming increasingly pervasive in our day-to-day lives. From speeding up the automated machinery on vehicle assembly lines to asking Siri on your Apple iPhone to call Hubby and remind him, for the tenth time, to buy fat-free and not full-cream milk on his way home, A.I. is being used to help us perform better, faster and more accurately every day.
Of course, it didn’t take long for A.I. to be applied to video as well. Video analytics has long been used to detect people in a video stream, but most of the older rules-based analytics produced poor-quality, inconsistent results. The alternative, however, was having thousands of cameras streaming onto hundreds of monitors in a control room, making it very difficult to spot and monitor suspicious activity on any individual camera. Something needed to be done, and A.I. technology has filled this need with aplomb.
A.I. can now be used by professional monitoring stations such as OmniVision’s to analyse the video streams of all connected cameras and perform tasks such as vehicle detection, person detection, facial recognition, traffic counting, people counting and license plate recognition (LPR), to name just a few. This makes it possible for OmniVision Security to connect to thousands of cameras simultaneously, and yet inexpensively and effectively highlight to a human operator whenever there is unwanted activity taking place on any one of those cameras. It’s no longer science fiction, but fact.
To understand how this is possible, let’s take a step back.
What is “Video Analytics”?
Video analytics technology processes a video feed using specialised algorithms to perform, in OmniVision’s case, security-related functions. Two of the most common types of algorithms are:
- Fixed-algorithm analytics
- Artificial Intelligence learning algorithms (including facial recognition)
These two types of analytics aim at the same result: determining whether unwanted or suspicious behavior is occurring in the field of view of a CCTV camera. The difference lies in how they do so. As the name implies, fixed algorithms use mathematical formulas designed to perform specific tasks and look for specific behaviors. The algorithm might be programmed, for example, to analyse the shading or contrast within an image, pick out the ‘edges’ it finds, and then look for the ‘typical’ head-and-shoulders pattern, or swinging arms connected to a torso. If it finds the shape it is looking for, it raises an alert to say “there is a human in the image!”. This is then highlighted to the control room operator, who reviews the activity and decides what further action to take.
Historically, this type of algorithm has been used to look for:
- Objects, vehicles or people crossing a virtual boundary
- Objects, vehicles or people moving in the wrong direction
- An article being left behind
- An article being picked up
- Loitering
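To make the idea concrete, two of the fixed rules above can be sketched in a few lines of Python: a virtual-boundary (“tripwire”) check and a loitering timer. This is a minimal illustrative sketch only, assuming an upstream tracker already supplies object centroids per frame; the function names and data shapes are assumptions, not any real product’s API.

```python
# Illustrative sketch of two fixed-rule analytics. Assumes a hypothetical
# tracker that reports each object's (x, y) centroid once per frame.

def crossed_boundary(prev, curr, boundary_x):
    """Alert if a centroid moved across a vertical virtual line at boundary_x.

    The sign of (x - boundary_x) flips when the line is crossed, so the
    product of the two offsets is negative exactly on a crossing.
    """
    return (prev[0] - boundary_x) * (curr[0] - boundary_x) < 0

def loitering(track, max_frames):
    """Alert if an object has been present for more than a fixed number of frames."""
    return len(track) > max_frames

# Example: an object drifting left-to-right across a tripwire at x = 100.
track = [(90, 50), (95, 52), (105, 51)]
alerts = [crossed_boundary(a, b, 100) for a, b in zip(track, track[1:])]
# alerts -> [False, True]: the crossing happens between the 2nd and 3rd frames
```

Note that the thresholds (the line position, the loitering limit) are hard-coded: exactly the “fixed” quality the article describes, with no way for operator feedback to adjust them.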
These algorithms are “fixed” because, once the algorithm is written, it does not change. There is no mechanism for the operator to feed back to the algorithm that it made a mistake. So if it comes out of the box thinking that a bird spreading its wings is a human head-and-shoulders pattern, this will never change, and the control room will forever receive false alerts whenever a bird spreads its wings on camera. And this is why A.I. has revolutionized the use of cameras for security.
What are learning algorithms?
These systems work differently from fixed algorithms in that they are not set to look for specific movements or shapes. Instead of fixed scripts, they use advanced statistical modelling to calculate percentage probabilities when deciding whether an alert is warranted. You could say that learning algorithms write their own script. Over hundreds of training sessions with an operator or with the system’s developer, the A.I.-based learning algorithm is essentially taught what a person looks like in all the different forms a person can take – wearing a cap or not, wearing a coat or holding a briefcase – and a good A.I. system will still recognize that the coat or cap is being worn by a human, just like you would! In some cases, the system can even be taught to recognize what normal activity takes place on a camera throughout the day. The system will then trigger an alert to OmniVision’s operators if it picks up activity that has not been seen before, or that is inconsistent with what is usually seen at that time of day, on that day of the week, on that camera.
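To illustrate the contrast with a fixed rule, here is a minimal sketch of a probability-based decision. It assumes a simple logistic model whose weights stand in for parameters learned from labelled training frames; the feature names and values are invented for illustration and do not reflect any real detector.

```python
import math

# Hypothetical weights "learned" from training examples. In a real system
# these come from training, not from hand-tuning, and would be updated as
# operators confirm or reject alerts.
WEIGHTS = {"edge_density": 2.0, "aspect_ratio": 1.5, "motion": 0.8}
BIAS = -3.0

def person_probability(features):
    """Logistic model: returns P(person) in [0, 1] rather than a yes/no rule."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))

def should_alert(features, threshold=0.7):
    """Alert only when the estimated probability clears a tunable threshold."""
    return person_probability(features) >= threshold

# A frame whose features resemble the training examples of a person
frame = {"edge_density": 0.9, "aspect_ratio": 1.0, "motion": 0.8}
```

The key difference from the fixed rule is that the output is a graded probability, and the weights can be re-estimated from new examples – which is, in miniature, the feedback loop that fixed algorithms lack.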
This is obviously very powerful technology when used to reduce crime.
Artificial Intelligence assists the operators at OmniVision in giving each event or suspicious activity their full focus. It allows each activity to be processed individually against its own pre-agreed protocol, allowing first responders to be more proactive when responding to an event on-site. And it allows a relatively small number of operators to accurately monitor the activity taking place on thousands of cameras simultaneously… which, in turn, results in uncompromising, cost-effective monitoring packages for OmniVision’s clients. What a win!