
Introduction

The Sentropy API identifies abusive text posted by users in online communities. The API currently scores text strings for specific classes of abuse that can be used to monitor, categorize, investigate, and moderate user generated content.

Authorization

To authorize, use this code:

# Include the Authorization header with each request
curl "https://api.sentropy.io/v1" \
  -H "Authorization: Bearer $TOKEN" \
  [...]

import requests

requests.post(
    "https://api.sentropy.io/v1",
    headers={"Authorization": "Bearer $TOKEN"},
    [...]
)

Make sure to replace $TOKEN with your API key.

Authorization to the API is performed using HTTP Bearer Auth. To obtain a bearer token, please reach out to apiaccess@sentropy.io.

Provide your bearer token on all API requests as a header, as in the following:

"Authorization: Bearer $TOKEN"

All API requests must be made over HTTPS. Requests made over HTTP or without a bearer token header will fail.

Get Abuse Classes

Single Message API

This endpoint returns the probability of each class of abuse in the request.

curl "https://api.sentropy.io/v1" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "id": "4m8aw",
    "author": "user_123",
    "segment": "gaming_chat"
    "text": "rl go to their offices and shoot everyone"
  }'

import os
import requests

request_data = {
    "id": "4m8aw",
    "author": "user_123",
    "segment": "gaming_chat",
    "text": "rl go to their offices and shoot everyone"
}

TOKEN = os.environ["TOKEN"]
requests.post(
    "https://api.sentropy.io/v1",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=request_data,  # json= sends a JSON body; data= would form-encode the dict
)

The above request will receive a JSON response payload similar to:

{
  "id": "4m8aw",
  "judgements": {
    "IDENTITY_ATTACK": {
      "score": 0.04475459083914757
    },
    "IDENTITY_ATTACK/ABILITY": {
      "score": null
    },
    "IDENTITY_ATTACK/AGE": {
      "score": null
    },
    "IDENTITY_ATTACK/ETHNICITY": {
      "score": null
    },
    "IDENTITY_ATTACK/GENDER": {
      "score": null
    },
    "IDENTITY_ATTACK/POLITICAL_GROUP": {
      "score": null
    },
    "IDENTITY_ATTACK/RELIGION": {
      "score": null
    },
    "IDENTITY_ATTACK/SEXUAL_ORIENTATION": {
      "score": null
    },
    "INSULT": {
      "score": 0.0002767886617220938
    },
    "PHYSICAL_VIOLENCE": {
      "score": 0.9987996816635132
    },
    "SELF_HARM": {
      "score": 0.00015191955026239157
    },
    "SEXUAL_AGGRESSION": {
      "score": 3.632059815572575e-05
    },
    "WHITE_SUPREMACIST_EXTREMISM": {
      "score": 0.012745199725031853
    }
  },
  "segment": "gaming_chat",
  "author": "user_123"
}

If an error occurs, the JSON response will be similar to:

{
  "error": "'text' is a required property"
}

HTTP Request

POST https://api.sentropy.io/v1

Payload

id (string)

An identifier for the text in this payload, unique among requests for a given authorized Abuse API user.

author (string)

An identifier for the author of the text in the payload.

segment (string)

An identifier for the channel, room, "server", or other community that is under evaluation.

text (string)

The text to be scored by the Abuse API for toxic content.

Response Object

Attributes

id (string)

This is the identifier provided on the request payload, unmodified.

author (string)

This is the author identifier provided on the request payload, unmodified.

segment (string)

This is the segment identifier provided on the request payload, unmodified.

judgements (object of objects)

This is an object keyed by all class and subclass labels supported by the API, and valued by Judgement objects.

error (string)

This field will be present and populated with an error message if the request fails.

Judgement Object

Attributes

score (float, nullable)

This is the estimated likelihood of the class label key to this Judgement. It is valued on the closed interval [0.0, 1.0].

For some labels, the API may make no claim about the given payload. In these cases the score is null. When a null score is observed for a class label, it is usually because the score for the class label's superclass is very low or null itself.
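Because subclass scores may be null, consumers should filter out null judgements before applying a moderation threshold. A minimal sketch (the abbreviated scores and the 0.8 threshold are illustrative, not API recommendations):

```python
# Abbreviated judgements from a hypothetical response payload.
judgements = {
    "IDENTITY_ATTACK": {"score": 0.0448},
    "IDENTITY_ATTACK/GENDER": {"score": None},  # API made no claim
    "INSULT": {"score": 0.0003},
    "PHYSICAL_VIOLENCE": {"score": 0.9988},
}

THRESHOLD = 0.8  # example moderation threshold; tune per community

# Keep only labels the API made a claim about (non-null scores),
# then flag those at or above the threshold.
scored = {label: j["score"] for label, j in judgements.items()
          if j["score"] is not None}
flagged = [label for label, score in scored.items() if score >= THRESHOLD]
```

With the scores above, only `PHYSICAL_VIOLENCE` ends up in `flagged`.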

Bulk API

curl "https://api.sentropy.io/v1/bulk" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
  "messages": [
    {
      "id": "4m8aw",
      "author": "user_123",
      "segment": "gaming_chat",
      "text": "rl go to their offices and shoot everyone"
    },
    {
      "id": "fw837",
      "author": "user_456",
      "segment": "gaming_chat"
    }
  ]
}'

import os
import requests

request_data = {
  "messages": [
    {
      "id": "4m8aw",
      "author": "user_123",
      "segment": "gaming_chat",
      "text": "rl go to their offices and shoot everyone"
    },
    {
      "id": "fw837",
      "author": "user_456",
      "segment": "gaming_chat"
    }
  ]
}

TOKEN = os.environ["TOKEN"]
requests.post(
    "https://api.sentropy.io/v1/bulk",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=request_data,  # json= sends a JSON body; data= would form-encode the dict
)

This API can be used to submit up to 100 messages for judgement in a single POST call.

The individual messages in the array follow the same rules as the single message endpoint.

An error in one of the messages will not cause the entire request to fail.

The above request will receive a JSON response payload similar to:

{
  "results": [
    {
      "id": "4m8aw",
      "judgements": {
        "IDENTITY_ATTACK": {
          "score": 0.04475459083914757
        },
        "IDENTITY_ATTACK/ABILITY": {
          "score": null
        },
        "IDENTITY_ATTACK/AGE": {
          "score": null
        },
        "IDENTITY_ATTACK/ETHNICITY": {
          "score": null
        },
        "IDENTITY_ATTACK/GENDER": {
          "score": null
        },
        "IDENTITY_ATTACK/POLITICAL_GROUP": {
          "score": null
        },
        "IDENTITY_ATTACK/RELIGION": {
          "score": null
        },
        "IDENTITY_ATTACK/SEXUAL_ORIENTATION": {
          "score": null
        },
        "INSULT": {
          "score": 0.0002767886617220938
        },
        "PHYSICAL_VIOLENCE": {
          "score": 0.9987996816635132
        },
        "SELF_HARM": {
          "score": 0.00015191955026239157
        },
        "SEXUAL_AGGRESSION": {
          "score": 0.00003632059815572575
        },
        "WHITE_SUPREMACIST_EXTREMISM": {
          "score": 0.012745199725031853
        }
      },
      "segment": "gaming_chat",
      "author": "user_123"
    },
    {
      "error": "'text' is a required property"
    }
  ]
}

HTTP Request

POST https://api.sentropy.io/v1/bulk

Payload

messages (list of objects)

A list of messages with each message following the same structure as the payload for the single message endpoint.

Response Object

Attributes

results (list of objects)

A list of objects, where each object follows the same structure as the single message response. Results appear in the same order as the messages in the request.

error (string)

This field will be present and populated with an error message if the request fails.
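Since results are positional and a per-message error does not fail the whole request, callers can pair each result with its request message and split successes from failures. A sketch with a hypothetical, abbreviated bulk response:

```python
# Abbreviated results from a hypothetical bulk response; the second
# request message omitted the required "text" field.
results = [
    {"id": "4m8aw",
     "judgements": {"PHYSICAL_VIOLENCE": {"score": 0.9988}},
     "segment": "gaming_chat",
     "author": "user_123"},
    {"error": "'text' is a required property"},
]

# IDs of the request messages, in the order they were submitted.
request_ids = ["4m8aw", "fw837"]

succeeded, failed = {}, {}
for req_id, result in zip(request_ids, results):
    if "error" in result:
        failed[req_id] = result["error"]  # per-message failure
    else:
        succeeded[req_id] = result["judgements"]
```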

Rate Limits

The Abuse API is rate limited. Authorization keys that exceed the rate limit will receive an HTTP 429 response.

The default rate limit is 1 request per second. For information about your rate limit or to increase your rate limit, please reach out to apiaccess@sentropy.io.
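One common way to stay within a 1 request/second budget is to retry on 429 with exponential backoff. A sketch, assuming the caller wraps its own request in a zero-argument callable (production code should also honor a Retry-After header if the API sends one):

```python
import time

def post_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry send() with exponential backoff while it returns HTTP 429.

    send is any zero-argument callable returning an object with a
    status_code attribute (e.g. a requests.Response).
    """
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return response  # give up after max_retries attempts
```

For example, `post_with_backoff(lambda: requests.post(url, headers=headers, json=payload))` would retry a rate-limited call before giving up.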

Abuse Class Definitions

Identity Attack

The IDENTITY_ATTACK class includes statements containing severe verbal attacks, threats, or hatred directed at people based on a shared identity such as gender, race, nationality, sexual orientation, etc. Language may contain dehumanizing speech (slurs and other vulgarities), statements of inferiority, expressions of contempt/disgust, or calls for violence, exclusion and/or segregation.

IDENTITY_ATTACK includes attacks that are not directed at a specific individual but which reference an identified protected group or class. It does not include attacks on institutions (e.g. religions, governments, countries, political groups, etc.) but does include attacks on the individuals that belong to those institutions (e.g. Muslims, Mexicans, communists).

Ability

The IDENTITY_ATTACK/ABILITY subclass describes an identity attack directed at those having a physical or mental condition or disability. The statement may contain explicit references to, or descriptions of, conditions or disabilities.

Age

The IDENTITY_ATTACK/AGE subclass describes an identity attack directed at those belonging to a defined generation or any age group.

Ethnicity

The IDENTITY_ATTACK/ETHNICITY subclass describes an identity attack directed at those belonging to an ethnicity, race, nationality, or geographical region.

Gender

The IDENTITY_ATTACK/GENDER subclass describes an identity attack directed at those identifying with a specific gender.

Political Group

The IDENTITY_ATTACK/POLITICAL_GROUP subclass describes an identity attack directed at members of a political group.

Religion

The IDENTITY_ATTACK/RELIGION subclass describes an identity attack directed at members of a religious group.

Sexual Orientation

The IDENTITY_ATTACK/SEXUAL_ORIENTATION subclass describes an identity attack directed at those identifying with a sexual orientation.

Insult

The INSULT class contains insults directed at a person in a conversation. An insult may refer to a person's physical traits (including race, sex, appearance), intelligence, personality, or behavior. Insults may contain profanity or other offensive language, but it is not a prerequisite.

The subject of the insult should be someone involved in a conversation, not a celebrity, fictional character, or other third person. Self-directed insults are not considered positive instances of this class. Statements that use insults to describe concepts or non-human objects are not considered part of this class. Quoting or explaining an insult is not considered part of this class.

Physical Violence

The PHYSICAL_VIOLENCE class describes text that meets any of the following criteria:

Self Harm

The SELF_HARM class describes text that meets any of the following criteria:

User language referring only to depression, symptoms of depression, or depressed thoughts and feelings not tied to self-harm or suicide is not a positive instance of this class. Recovery stories related to the topic of self-harm and/or suicide are also not considered part of this class.

Sexual Aggression

The SEXUAL_AGGRESSION class describes text that contains obscene, graphic, sexual language directed at a person, including threat of unwanted sexual acts. This includes at least one of the following:

White Supremacist Extremism

The WHITE_SUPREMACIST_EXTREMISM class describes content seeking to revive and implement the ideology of white supremacists. Ideologies encompassed by this class can include belief in any or all of the following: white racial superiority, white cultural supremacy and nostalgia, white nationalism, eugenics, Western traditional gender roles, racism, homophobia, xenophobia, anti-Semitism, Holocaust denial, Jewish conspiracy theories, and praise of Adolf Hitler.

WHITE_SUPREMACIST_EXTREMISM ideologies can be generalized into three categories which often overlap with each other: Neo-Nazism, White Racial Supremacy, and White Cultural Supremacy. While the expression of anti-Semitic, racist, xenophobic, and homophobic content overlaps with IDENTITY_ATTACK, we view references to the beliefs listed above as unique to this class.

Errors

The API uses the following error codes:

Error Code Meaning
400 Bad Request -- The request is missing fields, has extra fields, or has incorrectly typed fields.
401 Unauthorized -- The request was made without an API token.
403 Forbidden -- The API token is present but improperly formatted or not valid.
422 Unprocessable Entity -- The API has limitations that may cause it to return this error code with the message "Unable to process request". These limitations are:
- Text fields containing individual sentences with more than ~40 words.
- Text fields containing more than 200 sentences.
- Text fields containing only non-alphanumeric characters.
429 Too Many Requests -- The rate limit was exceeded.
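A caller might dispatch over these documented codes roughly as follows (the exception choices are illustrative, not prescribed by the API):

```python
def handle_response(status_code, body):
    """Rough dispatch over the documented Abuse API error codes."""
    if status_code == 200:
        return body
    if status_code == 400:
        # Missing, extra, or incorrectly typed fields.
        raise ValueError(f"Bad request: {body.get('error')}")
    if status_code in (401, 403):
        raise PermissionError("Check that the bearer token is present and valid")
    if status_code == 422:
        raise ValueError("Text exceeds API limits (sentence length or count)")
    if status_code == 429:
        raise RuntimeError("Rate limit exceeded; retry after a delay")
    raise RuntimeError(f"Unexpected status {status_code}")
```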