
Safeguard Your Kids Online with SafeToNet Behavioral Analytics App

Ditsa Keren, Technology Researcher. Updated on 2nd July 2023.

SafeToNet is a cyber safety company that safeguards children from bullying, sextortion, and abuse on social networks and messaging apps. The SafeToNet app is built on an AI environment that can contextualise the messages kids receive, figure out what's harmful, and filter it before the damage is done. It's a deep-tech, multi-faceted solution that goes well beyond AI behavioral analytics: it analyses changes in a child's behavior and notifies parents when a suspicious change is detected. We spoke to CEO Richard Pursey to hear about the app's abilities and how it revolutionizes child safeguarding online.

How does the SafeToNet app work?

We've been training our algorithm to understand behavioral patterns, every minute, every day.

Parents download the app to the child's device, and pair it to their own device.

The app never shows a parent what the child is sending or receiving, as we believe that children, too, have a right to privacy. Parents can, however, manage the risks.

For instance, the software might say the child is being bullied on Instagram, and guide and advise the parent on how to talk to the child about the risk.  Part of this recommendation might be to disable Instagram for a while, but content filtering is only a part of it. The rest is about giving parents appropriate advice on how to respond to such events.

Research shows that when it comes to social media, children know a lot more than their parents, and that's one of parents' biggest worries. The software teaches parents about the threat as it occurs in real time and guides them through to the solution.

Can this solution be utilized in an organizational environment?

Yes. Our brand is also registered as SafeToWork, which is all about behavioral analytics, a technology that could easily be adapted to the workplace.

We're working with a global brand that had a situation. A member of staff was using Facebook on his phone, and opened a pornographic message just as somebody was walking behind him. That person was reported and lost his job.

You could rightly argue that a business has the right to defend its brand, but employees also have rights, so who would you favor?

People spend so much time on social networks that they could be placing their company in danger. In this particular case, the company had given phones to all its employees, so it felt it had a right to know what they were doing. However, saying that such a product would be unpopular is an understatement.

How is it different when discussing child protection?

Child suicide rates are going up year by year, largely due to cyberbullying and other risks that kids and their parents are not aware of.

We read a lot about abuse and aggression online. In the UK, we're reaching a point where people have had enough. It's like we're living in a social experiment; nobody knows where it will end up, but there are so many problems and they go beyond ransomware.

The internet isn’t regulated; people say what they like because they get a feeling of anonymity. We all have a duty to do something about it, but how do you know what your child is doing? Most reasonable people would say something has to be done.

What can you tell us about apps like the Blue Whale?

Sadly, the Blue Whale didn't shock or surprise me, because it's only one of many similar apps that endanger children's lives and mess with their minds.

Children download these apps because they think it's cool to take risks. There are a number of apps that you can get yourself into trouble with.

Getting drunk and then being seen online is now a trendy thing, but that's only the tip of the iceberg. People can hurt themselves and others, or even die, just to get some attention online, and there are more and more apps that encourage such behavior.

With over 5 million apps out there, how could we possibly know which ones are safe for your children and which ones aren't?

Software like SafeToNet is vital. We can advise parents when these trends emerge, to keep them aware and alert.

There's a Peppa Pig video on YouTube where Peppa takes a knife and cuts her own head off. Similar videos with Blaze and the Monster Machines and other popular cartoons are being seen millions of times before YouTube removes them.

We can't know every risk, so we rely on our community.

By building communities of collaborative safeguarding, parents can warn each other and keep their children safer. Our software can filter URLs and apps fairly quickly after they are reported, and inform large numbers of parents. We do this much quicker than Google, because we have no commercial benefit from displaying them.
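The community-reporting flow described above could be sketched, purely as an illustration, as a report counter that blocks a URL for everyone once enough parents have flagged it. The class name and the threshold of three reports are hypothetical choices, not SafeToNet's actual implementation:

```python
from collections import Counter

class CommunityBlocklist:
    """Toy sketch of collaborative safeguarding: a URL reported by
    enough parents is filtered for all subscribed families.
    The default threshold of 3 is an arbitrary illustrative value."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.reports = Counter()   # url -> number of parent reports
        self.blocked = set()       # urls currently filtered

    def report(self, url: str) -> None:
        """Record one parent report; block the URL once the threshold is met."""
        self.reports[url] += 1
        if self.reports[url] >= self.threshold:
            self.blocked.add(url)

    def is_blocked(self, url: str) -> bool:
        return url in self.blocked
```

A real system would also need to deduplicate reports per family and handle malicious flagging, which this sketch deliberately omits.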

Is there any regulation around safeguarding children online?

No, it's mainly a self-imposed regulation. Certain apps set age limits, but those are easy to bypass. Also, there are many apps that endanger kids not because of the content they deliver, but because of the people who use them. What children do on those apps and who they interact with could vary greatly from one child to another.

Facebook openly admits it has over 270 million "undesirable users" on its network. In Facebook's terminology, which is not clearly defined, this means either fake or duplicate accounts. That's roughly 1 in 10 users, so if kids have an average of 300 people on their friends list, 30 of them could be fake identities.

Sextortion is a huge global issue. Kids are sending images of themselves to people they have never met face to face. Our software is designed to identify that using the multi-faceted tools we deploy.

For instance, you can install our own keyboard on your child's phone to detect changes in his or her behavior. You'll be surprised to see how many things can be determined just by the speed of typing.

In a more familiar relationship, children typically type without much hesitation, not giving it much thought. However, when interacting with others, they tend to be more cautious and deliberate in selecting their words.

By patterning the language and emojis being used, and the position of those emojis, you can start to detect changes in behavioral patterns.

If I'm normally online at certain hours, and I suddenly use more aggressive language or respond more quickly, using more or fewer words, there may be a strong likelihood of aggression. Online, being abusive seems to be treated as acceptable.

In particular, if I started calling you names, you're likely to change your behavior pattern. You might go quiet or raise your voice; those are the kind of changes we look for. Typically, they become punchy and use fewer words. That's how we trained our software to detect differences in mood, and block content before a child is hurt.
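The idea of flagging a deviation from a child's own baseline, as described above, can be sketched as a simple z-score on one typing feature (say, words per message or seconds to reply). This is a minimal illustration of the general technique, not SafeToNet's model; the threshold of 2.0 is an assumed value:

```python
import statistics

def behavior_change_score(baseline: list, recent: list) -> float:
    """How many baseline standard deviations does recent behavior
    deviate from the child's own historical mean? Each list holds
    observations of a single feature, e.g. words per message."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return 0.0
    return abs(statistics.mean(recent) - mean) / stdev

def is_suspicious(baseline: list, recent: list, threshold: float = 2.0) -> bool:
    """Flag when the deviation exceeds an arbitrary illustrative threshold."""
    return behavior_change_score(baseline, recent) > threshold
```

For example, a child who usually writes about 12 words per message and suddenly becomes "punchy", sending 3-to-5-word replies, would score far above the threshold, while normal day-to-day variation would not.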

What languages does SafeToNet support?

At the moment, SafeToNet is only supported in English. Although translating the software is possible, teaching it to identify semantics in different languages could be very tricky.

What you might find offensive may not be offensive to me, because we come from different cultures. I might swear a lot; you might not. With our software, parents can allow a certain level of profanity, or filter it out completely. So to translate our software, we would need to teach it to read between the lines, identify the subtle differences in how people communicate, and learn what's okay and what's not.

What changes can we expect to see in the near future with regards to child safeguarding online?

I could give you lots of different opinions, but one surpasses them all: something has to be done. Many parents are talking about how the large corporations should act, but the corporations aren't doing anything about it.

The way I see it, responsibility will slowly shift towards the parents.

If I go into my car, I have to put a seat belt on because it’s the law, but also because it's safer.

Similarly, the whole landscape of online safeguarding will move away from blaming "the system" toward taking personal responsibility.

It is beyond me that newborns are given iPads. I think there will be much greater recognition of the risks involved and the damage screens can do to such young children. In the future, parents will never give their child a phone unless it is safeguarded; if that doesn't happen, the social experiment will end badly.

Food packaging has warnings about the health risks, but there are no warnings for mobile devices and apps. Child data privacy will become a standard part of life in the future. If not, who knows where we'll end up?

About the Author

Ditsa Keren is a cybersecurity expert with a keen interest in technology and digital privacy.
