
Manual Reviews


One of the ways that your payment fraud model can receive feedback is via manual reviews (another way is through Disputes).

Manual reviews are an explicit intervention by an analyst or member of your team to mark a customer as genuine or fraudulent regardless of the Ravelin action.

Machine learning is based on the idea that an algorithm can teach itself to identify patterns in large volumes of data, using statistical techniques to make accurate predictions about events in the real world.

With support and feedback from human experts, such as fraud analysts, a machine learning model can be continuously enhanced and optimised to tackle big problems. Algorithms like the one we use at Ravelin can become even more accurate at making predictions as they consume more data and receive feedback on previous forecasts.

The Benefits of Manual Reviews

There are two main reasons why doing manual reviews can be beneficial.

  1. Accurate manual reviews can help Ravelin to optimise your machine learning model. This means the model can get even better at making predictions on whether or not a customer is genuine.

  2. Manual reviews can also be used as a tool to catch more fraud in real time, hampering attacks as they happen. You can use manual reviews to help identify fraud hotspots or target networks via Connect.

The Impact of Reviews

Reviewing Customers as Genuine

If you review a customer as genuine then Ravelin will always return ALLOW for that customer, unless they incur an unforgiven dispute after being reviewed as genuine. Until they incur an unforgiven dispute, they will always be able to transact, however high their machine learning score and however strongly they are connected to fraud networks. Any rules that you configure will not change this.

Manual reviews stay active for customers and can only be overturned by another review. If you mark someone as genuine, they will be treated as such and allowed to transact for all future orders until they incur an unforgiven dispute.

You can configure additional options for manual reviews if needed - you can read more in our Help Centre.

We advise you to be very cautious when adding a genuine manual review - only do so when you are certain the customer is genuine and when it is necessary. For example, the customer may be very important and you cannot risk losing their business because of an incorrect fraud block, or a customer may have complained and you have made additional checks. But beware: some fraudsters call up and demand to be allowed to transact, so always make additional checks to confirm who they are.

Remember, genuine manual reviews will make your machine learning model less likely to prevent similar customers.

If you do review someone as genuine, then please do leave a comment on the customer explaining why. Comments can be very helpful to Ravelin’s machine learning engineers as they try to improve the system. And they’ll help the next person from your company who looks at that customer record.

Reviewing Customers as Fraudsters

If you mark someone as a fraudster, they will be marked as such for all future orders and prevented from transacting.

Reviewing as a fraudster carries somewhat less risk than reviewing as genuine. However, potential future payments from that customer will be lost, and you may also block other customers connected to them in the network. You should still take care when reviewing as a fraudster: as well as any direct losses to your company, incorrect fraud reviews may eventually damage your machine learning model.

Remember, manual reviews stay active for customers and can only be overturned by another review. That customer will never be able to transact again unless you remove the review. No rules you set will change this (Ravelin personnel can install rules that do override fraudster reviews - contact us if you need this).
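
The review precedence described in the last few sections can be sketched in a few lines of code. This is only an illustrative model of the behaviour described on this page - the names and structure are invented for the example, not Ravelin's implementation or API:

```python
# Illustrative sketch of manual review precedence as described above.
# All names here are invented for the example; this is not Ravelin's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    review: Optional[str]          # "genuine", "fraudster", or None
    has_unforgiven_dispute: bool   # incurred after a genuine review
    score_action: str              # "ALLOW" or "PREVENT" from the model and rules

def recommended_action(customer: Customer) -> str:
    # A fraudster review always wins: the customer is prevented for all
    # future orders until the review itself is removed.
    if customer.review == "fraudster":
        return "PREVENT"
    # A genuine review forces ALLOW, however high the score, unless the
    # customer has since incurred an unforgiven dispute.
    if customer.review == "genuine" and not customer.has_unforgiven_dispute:
        return "ALLOW"
    # With no active review, the machine learning score and any
    # configured rules decide as normal.
    return customer.score_action

blocked = Customer(review="fraudster", has_unforgiven_dispute=False, score_action="ALLOW")
print(recommended_action(blocked))  # PREVENT, whatever the score said
```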

Please note, fraudster manual reviews will make your machine learning model more likely to prevent similar customers.

As with genuine reviews, please leave comments with your fraud reviews. These are invaluable to Ravelin, to other people at your company and potentially to you at a later date. Explain why you reviewed this customer as a fraudster and any additional evidence you might have found.

Machine Learning Model

Manual reviews feed into our machine learning model, so they will affect which customers we allow and prevent in the future via the Ravelin score. Similar customers may also be prevented or allowed based on your reviews.
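
As a loose sketch of the idea, reviews and unforgiven disputes act like labels on past customers that the model can train against. The field names and helper below are invented for illustration and do not reflect Ravelin's actual training pipeline:

```python
# Conceptual sketch only: how reviews and disputes could become training
# labels. Field names are invented; this is not Ravelin's pipeline.
from typing import Optional

def label_for(customer: dict) -> Optional[int]:
    """Map feedback on a customer to a binary training label."""
    if customer.get("review") == "fraudster" or customer.get("unforgiven_dispute"):
        return 1  # treat as fraud
    if customer.get("review") == "genuine":
        return 0  # treat as genuine
    return None   # no feedback: contributes no label

customers = [
    {"id": "c1", "review": "fraudster"},
    {"id": "c2", "review": "genuine"},
    {"id": "c3", "unforgiven_dispute": True},
    {"id": "c4"},  # never reviewed, no dispute
]

training_set = [(c["id"], label_for(c)) for c in customers if label_for(c) is not None]
print(training_set)  # [('c1', 1), ('c2', 0), ('c3', 1)]
```

Viewed this way, an inaccurate review becomes an inaccurate label, which is why incorrect reviews can damage the accuracy of the model.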

Network

Any customer marked as a fraudster will also have an impact on their network via hops to fraud. For example, if one customer in a network is marked as a fraudster and is within x hops of another customer in the same network, the second customer will be automatically blocked.

Hops to fraud is configurable, so it will differ from company to company - contact support@ravelin.com if you have any questions about how hops to fraud is set up for you.
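
To make the mechanics concrete, here is a minimal sketch of a hops to fraud check as a breadth-first search over a customer graph. The graph representation and function names are our assumptions for illustration; Ravelin's internal implementation may differ:

```python
# Sketch of the "hops to fraud" idea: find every customer within
# max_hops of a reviewed fraudster in the connection graph.
# Graph representation and names are illustrative, not Ravelin's internals.
from collections import deque

def customers_within_hops(graph: dict[str, list[str]],
                          fraudster: str, max_hops: int) -> set[str]:
    """Breadth-first search from a reviewed fraudster, returning every
    customer reachable in at most max_hops connections."""
    seen = {fraudster}
    frontier = deque([(fraudster, 0)])
    blocked = set()
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not expand beyond the configured hop limit
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                blocked.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return blocked

# Example: A shares a card with B, and B shares a device with C.
network = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(customers_within_hops(network, "A", max_hops=1))  # {'B'}
print(customers_within_hops(network, "A", max_hops=2))  # {'B', 'C'}
```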

Recommendations on Reviewing

Who you should review depends on what impact you want your reviews to have.

Reviewing to Optimise Your Model

If you would like to do reviews with the goal of optimising your machine learning model, you should review customers you are very confident about: either customers we prevented who are obviously genuine (false positives), or customers we allowed who are obviously fraudulent (false negatives).

Reviewing as a Tool to Catch More Fraud

If you would like to use manual reviews as a tool to catch more fraud in real time, you can use them to help identify fraud hotspots or target networks. For example, if there is an increase in reviewed fraudsters on a given street, everyone else who tries to order there is scored higher because the location contributes more to their score.
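
As a rough illustration of that location effect, the toy feature below rises as fraudster reviews accumulate at an address. The formula is invented for the example; in practice the model learns location contributions from the data:

```python
# Toy illustration of a location feature growing as fraudster reviews
# accumulate at an address. The formula is invented for this example;
# the real model learns its location contributions from data.

def location_risk(reviewed_fraudsters_at_location: int,
                  total_customers_at_location: int) -> float:
    """Share of customers at a location who are reviewed fraudsters."""
    if total_customers_at_location == 0:
        return 0.0
    return reviewed_fraudsters_at_location / total_customers_at_location

# One review among 50 customers on a street vs. five reviews:
print(location_risk(1, 50))  # 0.02
print(location_risk(5, 50))  # 0.1 - new orders from here look riskier
```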

You can also use reviews to prevent connected customers through hops to fraud. For example, if you review someone in a network as a fraudster, then any new accounts linked to them, or any newly registered customers who become linked to them, will be prevented because of the hops to fraud rule.

To use reviews to catch more fraud in real time, you could focus on reviewing newly registered customers. Fraudsters creating a lot of accounts tend to operate from the same location, so one manual review on a new account can help to stop other fake accounts that share that location from making orders.

To target accounts in networks, you can review customers in fast growing networks or networks that you think don’t look right. You can view your fastest growing networks via the Discover tab on Connect.

Beware of reviewing based on untested theories. For example, one common misconception is that an email address with a lot of numbers is a sign of a customer that is likely to commit fraud. The data does not appear to support this conclusion. If you do have a theory about patterns that fraudsters follow then please let us know so we can evaluate it and consider it for inclusion in our machine learning models.

Who NOT to review

Customers you are not sure about: if you are not sure how to review someone, we suggest that you don't review them at all.

You should not review customers who look genuine and have a very low score or who look fraudulent and have a very high score - these have been correctly classified by the model so there is no need to review them.

You can tag customers, add them to watchlists or leave a comment if you’d like to keep an eye on suspicious accounts. Whilst manual reviews can be beneficial, incorrect reviews can potentially damage the accuracy of the model.

Number of manual reviews required

There is no required number of manual reviews. It is far more important to have accurate reviews than a high number of reviews. If you want to do manual reviews, we suggest only doing ones you are confident about, even if that means doing fewer.

You do not have to do manual reviews. Ravelin can rely solely on Disputes to provide feedback for the machine learning model, though this may mean your model is not as accurate as it could be.

Next steps

Test your payment fraud integration

Ensure you’re handling errors correctly

Get ready to go live with payment fraud recommendations
