Features as a Service allows your in-house data scientists and risk teams to benefit from Ravelin’s data insights and infrastructure, by providing machine learning features as an API response. These machine learning features can then be used as an input to your internal risk models.
We support returning machine learning features on our Checkout and Connect endpoints. We do not support this on any other endpoints.
Features as a Service must be enabled on your account before it can be used; please speak to your account manager about enabling it. Features as a Service can also be purchased as a standalone product.
Ravelin uses an extensive list of machine learning features to power our models, and with Features as a Service we make the majority of these machine learning features available to you.
This includes machine learning features derived from our Connect graph database and our consortium dataset.
As part of your integration, Ravelin will share a list of machine learning features with you. You can choose the machine learning features you would like returned in the API response.
In order for us to calculate machine learning features, you should send data to our Checkout endpoint.
To indicate that you would like the machine learning features returned in the response, you should add the query parameter features=true to the URL. We will return the machine learning features in a features JSON object in the response.
Please note that it is not possible to request machine learning features and a recommendation in the same request.
An example request, for a customer ordering a pizza from their phone, is shown below:
POST https://api.ravelin.com/v2/checkout?features=true HTTP/1.1
Authorization: token ...
Content-Type: application/json
{
  "timestamp": 1512828988826,
  "customer": {
    "customerId": "61283761287361",
    "registrationTime": 1512828988826,
    "email": "jsmith123@example.com",
    "emailVerifiedTime": 1512828988826,
    "familyName": "Smith",
    "givenName": "John",
    "telephone": "+447000000001",
    "telephoneVerifiedTime": 1512828988826,
    "telephoneCountry": "GBR"
  },
  "device": {
    "deviceId": "65fc5ac0-2ba3-4a3b-aa5e-f5a77b845260",
    "type": "phone",
    "manufacturer": "google",
    "model": "Pixel XL",
    "os": "android",
    "language": "en-US",
    "ipAddress": "81.152.92.84"
  },
  "order": {
    "orderId": "abcde12345-ZXY",
    "creationTime": 1512828988826,
    "price": 1500,
    "currency": "GBP",
    "market": "emea",
    "country": "GBR",
    "marketCity": "london",
    "from": {
      "street1": "1 Main Street",
      "city": "London",
      "country": "GBR"
    },
    "to": {
      "street1": "72 High Street",
      "city": "London",
      "country": "GBR"
    },
    "items": [
      {
        "sku": "0001",
        "name": "Margherita Pizza",
        "quantity": 1,
        "price": 1500
      }
    ],
    "status": {
      "stage": "pending",
      "actor": "merchant"
    }
  },
  "paymentMethods": [
    {
      "paymentMethodId": "pm-abc123",
      "instrumentId": "fp_abc123",
      "methodType": "card",
      "scheme": "visa",
      "cardBin": "535522",
      "cardLastFour": "0001",
      "expiryMonth": 7,
      "expiryYear": 2020,
      "nameOnCard": "John Smith",
      "billingAddress": {
        "addresseeName": "John Smith",
        "street1": "123 High Street",
        "city": "London",
        "country": "GBR",
        "postalCode": "E1 1AA"
      }
    }
  ],
  "transactions": [
    {
      "transactionId": "123-abc-XYZ",
      "paymentMethodId": "pm-abc123",
      "time": 1512828988826,
      "amount": 1500,
      "currency": "GBP",
      "type": "auth",
      "gateway": "example-gateway"
    }
  ]
}
An example response is shown below:
{
  "status": 200,
  "timestamp": 1512828988826,
  "data": {
    "customerId": "61283761287361",
    "features": {
      "customer": {
        "emailLength": 21,
        "paymentMethodsRegisteredLastMonth": 5,
        "minutesSinceRegistration": 3556893,
        "minutesFromRegisterToOrder": 2427,
        "cancelledOrderCountLastMonth": 2,
        "successfulOrderCountLastMonth": 1,
        "transactionsByShippingAddressLastWeek": 3,
        "paymentMethodCountryMatches": 2,
        "paymentMethodRegisteredVelocityCount24h": 3,
        "addressFraudScore": 0.211,
        "emailFraudScore": 0.0371,
        "asnFraudScore": 0.373,
        "emailDomainFraudScore": 0.278,
        "binFraudScore": 0.01,
        "transactionDeclineCodeFraudScore": 0.01,
        "ipAddressFraudScore": 0.01,
        "cardIssuerCountryFraudScore": 0.01,
        "paymentMethodTypeFraudScore": 0.01,
        "deviceDegreeMean": 1.472,
        "deviceDegreeMax": 3,
        "edgeGeneralMeanAge": 1800,
        "edgeGeneralGrowthRate": 4,
        "edgeGeneralCount": 5,
        "edgeLocalMeanAge": 720,
        "edgeLocalGrowthRate": 3,
        "orderStartAddressLat": -117.662,
        "orderStartAddressLong": -117.662,
        "orderStartAddressNormalised": "1 main street london gbr",
        "orderEndAddressLat": -117.662,
        "orderEndAddressLong": -117.662,
        "orderEndAddressGeohash": "w21zd2mkt",
        "orderEndAddressNormalised": "72 high street london gbr",
        "deviceType": "phone",
        "deviceModel": "Pixel XL",
        "deviceOS": "android",
        "minutesSinceFirstSeen": "2022-11-21T03:39:30Z",
        "isRootedOrJailbroken": true
      }
    }
  }
}
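If you are calling the Checkout endpoint from code, a minimal sketch using Python and the requests library might look like the following. The token placeholder, function name and the specific feature keys pulled out at the end are illustrative assumptions; the payload is the example request above, and the feature names you receive will be those agreed during your integration.

```python
import requests

RAVELIN_API_TOKEN = "..."  # your secret API key (illustrative placeholder)

def get_checkout_features(checkout_payload):
    """POST a checkout event with features=true and return the ML features."""
    response = requests.post(
        "https://api.ravelin.com/v2/checkout",
        params={"features": "true"},  # request features instead of a recommendation
        headers={"Authorization": f"token {RAVELIN_API_TOKEN}"},
        json=checkout_payload,
        timeout=5,
    )
    response.raise_for_status()
    # Features are nested under data.features.customer in the response.
    return response.json()["data"]["features"]["customer"]

# Example usage, with checkout_payload built as in the request body above:
#   features = get_checkout_features(checkout_payload)
#   model_inputs = [features["emailFraudScore"], features["binFraudScore"]]
```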
If you want to make use of machine learning features that rely on dispute data, you will need to send data to our Dispute endpoint. We don’t return machine learning features in the Dispute endpoint response; however, we take this dispute data into account when calculating the machine learning features for requests to our Checkout endpoint.
Some machine learning features may require additional integration work in order to function correctly. For example, you may need to use Ravelin’s device intelligence libraries. This will be discussed with you as part of the integration process.
Ravelin’s Connect graph database allows you to create a connected network of your customers using common attributes such as emails, phone numbers, device IDs and payment methods. You can read more in the Connect guide.
If you are only interested in getting machine learning features that rely on our graph database, you can retrieve machine learning features centred around a target customer by adding the query parameter features=true to the requests, or by sending a separate GET request to /v2/connect/customers/:customerID.
The machine learning features relate to:
- the number of customers, cards, emails, phone numbers and devices connected to the target customer
- the number of chargebacks, reviewed fraudsters and reviewed genuine customers in the network, and the number of hops to the nearest fraud
- the minimum, mean and maximum degree of each entity type
- the age, growth rate and count of the edges in the network
- tags applied to connected customers, and the depth at which they were found
Each machine learning feature is calculated within a set number of hops (the depth) of the target node. Counting stops when a chargeback or reviewed fraudster is reached.
An example GET request for a specific customer’s Connect machine learning features is shown below. The features=true query parameter is not needed on this request.
GET https://api.ravelin.com/v2/connect/customers/61283761287361 HTTP/1.1
Authorization: token ...
Content-Type: application/json
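As a sketch only, the same GET request could be made from Python with the requests library; the token placeholder and function name below are assumptions, and the fields read from the response are taken from the example response further down.

```python
import requests

RAVELIN_API_TOKEN = "..."  # your secret API key (illustrative placeholder)

def get_connect_features(customer_id):
    """Fetch Connect graph features centred on a single customer."""
    response = requests.get(
        f"https://api.ravelin.com/v2/connect/customers/{customer_id}",
        headers={"Authorization": f"token {RAVELIN_API_TOKEN}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

# Example usage:
#   connect_features = get_connect_features("61283761287361")
#   print(connect_features["hopsToFraud"], connect_features["deviceDegreeMean"])
```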
An example Connect request linking a customer to a payment method is shown below. To indicate that you would like the Connect machine learning features returned in the response, you should add the query parameter features=true to the URL.
POST https://api.ravelin.com/v2/connect?features=true HTTP/1.1
Authorization: token ...
Content-Type: application/json
{
  "timestamp": 1512828988826,
  "customer": {
    "customerId": "61283761287361"
  },
  "paymentMethods": [
    {
      "card": {
        "paymentMethodId": "pm-abc123",
        "instrumentId": "fp_abc123"
      }
    }
  ]
}
The response to both these Connect requests is in the same format.
An example response is shown below:
{
  "timestamp": 1512828988826,
  "customerID": "61283761287361",
  "count": 60,
  "customerCount": 5,
  "cardCount": 4,
  "chargebackCount": 1,
  "reviewedFraudsterCount": 1,
  "reviewedGenuineCount": 0,
  "emailCount": 4,
  "phoneCount": 3,
  "deviceCount": 4,
  "hopsToFraud": 3,
  "customerDegreeMin": 2,
  "customerDegreeMean": 3.5,
  "customerDegreeMax": 4,
  "cardDegreeMin": 1,
  "cardDegreeMean": 1.2,
  "cardDegreeMax": 2,
  "emailDegreeMin": 1,
  "emailDegreeMean": 1.3,
  "emailDegreeMax": 3,
  "phoneDegreeMin": 1,
  "phoneDegreeMean": 1.1,
  "phoneDegreeMax": 2,
  "deviceDegreeMin": 1,
  "deviceDegreeMean": 1.472,
  "deviceDegreeMax": 3,
  "edgeGeneralMeanAge": 1800,
  "edgeGeneralGrowthRate": 4,
  "edgeGeneralCount": 5,
  "edgeLocalMeanAge": 720,
  "edgeLocalGrowthRate": 3,
  "meanDegree": 4.7,
  "edgeLocalCount": 1,
  "tags": [
    {
      "tagName": "VIP",
      "depth": 7
    },
    {
      "tagName": "Suspected fraud",
      "depth": 3
    }
  ]
}
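If you plan to feed these graph features into an in-house model, a small illustrative Python sketch of flattening the response above into a fixed-order numeric vector is shown below; the selection of fields and the handling of tags are assumptions, not a prescribed mapping.

```python
# Illustrative only: flatten a parsed Connect features response (a dict)
# into a fixed-order numeric vector for an in-house model.
NUMERIC_FIELDS = [
    "count", "customerCount", "cardCount", "chargebackCount",
    "reviewedFraudsterCount", "hopsToFraud",
    "customerDegreeMean", "cardDegreeMean", "deviceDegreeMean",
    "edgeGeneralMeanAge", "edgeGeneralGrowthRate", "edgeLocalMeanAge",
]

def connect_features_to_vector(connect_response):
    """Extract the numeric fields in a fixed order, plus a simple tag flag."""
    vector = [float(connect_response.get(field, 0)) for field in NUMERIC_FIELDS]
    # Derive an extra binary signal from tags: 1.0 if any connected customer
    # carries a "Suspected fraud" tag, else 0.0.
    tags = {tag["tagName"] for tag in connect_response.get("tags", [])}
    vector.append(1.0 if "Suspected fraud" in tags else 0.0)
    return vector
```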
There are a number of limits in place when using our graph database. See the Connect guide for more details.
After your live integration is working, and you are using features deterministically, we recommend providing historical data for Connect. It is extremely useful as it allows Connect to be seeded with historical connections.
The same API is available under the path /v2/backfill/connect for batch upload of historical data. This API has a separate, higher rate limit than /v2/connect, so that you do not interrupt your production system’s use of Connect. The additions are processed asynchronously.
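As a hedged sketch, assuming each historical record is already shaped like a normal /v2/connect request body, a simple backfill loop in Python might look like this; the record source, batch size and any retry policy are your own choices and are not prescribed by Ravelin.

```python
import requests

RAVELIN_API_TOKEN = "..."  # your secret API key (illustrative placeholder)

def backfill_connect(historical_records):
    """Upload historical Connect data via the backfill path.

    Each record should be shaped like a normal /v2/connect request body.
    The additions are processed asynchronously by Ravelin.
    """
    session = requests.Session()
    session.headers.update({"Authorization": f"token {RAVELIN_API_TOKEN}"})
    for record in historical_records:
        response = session.post(
            "https://api.ravelin.com/v2/backfill/connect",
            json=record,
            timeout=10,
        )
        response.raise_for_status()
```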
If you cannot reasonably upload historical data to these endpoints, please discuss with your account manager: we can accept data in other formats (CSV, TSV) and process it using our own infrastructure.
If you are planning on using our Connect Features in your own machine learning system, and you have provided historical data or have data in Connect already, we can produce a file of historical features that correspond to the responses that Connect would have given, had you been using it in the past.
These can then be joined onto your historical training data for models, to ensure you have coverage throughout history.
Please speak to your account manager to arrange this.
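As an illustration only, assuming the historical features are delivered as a CSV keyed on a customer identifier (the actual delivery format and column names are agreed with your account manager), joining them onto your training data with pandas might look like:

```python
import pandas as pd

# Hypothetical file and column names; the actual delivery format is agreed
# during integration with your account manager.
historical_features = pd.read_csv("ravelin_connect_features.csv")
training_data = pd.read_csv("orders_training_set.csv")

# Attach the historical Connect features to each training example by customer ID.
training_with_features = training_data.merge(
    historical_features,
    on="customerId",
    how="left",
)
```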