
perspectiveToxicity

Latest Version
v0.0.1

Overview

Analyze the perceived impact a comment might have on a conversation using Perspective API.

Usage

### How this function works

Use this function to get toxicity scores from Perspective API for comments written to a Cloud Firestore collection.

The function runs Perspective API on the collection and text field you configure. The API uses machine learning models to score the perceived impact a comment might have on a conversation by evaluating that comment across a range of emotional concepts, called attributes.

> Note that you will need to enable Perspective API by following the instructions on the [Get Started page](https://developers.perspectiveapi.com/s/docs-get-started) to request API access and then [enable the API](https://developers.perspectiveapi.com/s/docs-enable-the-api).

When you install this function, you specify the attributes you want to receive scores for. Perspective's main attribute is TOXICITY, defined as a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.

### Prerequisites

- You should be familiar with **Firestore**.
- You should be familiar with **Perspective API**.

### Function details

To complete the installation, fill in the required configuration:

- **Document Path**: The document path you'd like this function to listen to. Use a placeholder for the document ID (e.g., `/collection/{docId}`).
- **PERSPECTIVE_API_KEY**: The API key used to call the Perspective API. Create it on the [GCP API & Services / Credentials page](https://console.cloud.google.com/apis/credentials).
- **INPUT_FIELD_NAME**: The name of the document field that contains the text you want to analyze.
- **OUTPUT_FIELD_NAME**: The name of the document field that will contain the toxicity scores for your input text.
- **ATTRIBUTES**: The attribute(s) you want to receive scores for. You can add multiple names separated by commas. For help selecting attributes, see the list of [available attributes](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages).
- **DO_NOT_STORE**: Whether Perspective API is permitted to store the comment sent in the request (the contents of the input field). Stored comments are used to improve the API over time.

> DO_NOT_STORE should be set to true if the data being submitted is private (i.e., not publicly accessible) or contains content written by someone under 13 years old.

The following is an example of a document after the function has run, with the configured input and output field names shown as placeholders:

```
{
  INPUT_FIELD_NAME: "Shut up. You're an idiot!",
  OUTPUT_FIELD_NAME: {
    IDENTITY_ATTACK: 0.25810158,
    INSULT: 0.98741966,
    PROFANITY: 0.88505524,
    SEVERE_TOXICITY: 0.7638047,
    THREAT: 0.16459802,
    TOXICITY: 0.8897058
  }
}
```

Whenever you write a string to the `INPUT_FIELD_NAME` field of a document matching the `Document Path`, this function does the following:

- Scores the text with Perspective API, producing a score for each configured attribute.
- Adds those scores to the specified `OUTPUT_FIELD_NAME` field in the same document.

If the `INPUT_FIELD_NAME` field of the document is later updated, the scores are automatically recalculated as well.

The `OUTPUT_FIELD_NAME` field holds one numeric score per requested attribute. Each score indicates how likely it is that a reader would perceive the comment as containing that attribute.
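For concreteness, here is a minimal sketch of writing a comment from a Node.js client with the Firebase Admin SDK and waiting for the scores to appear. The collection name `comments` and the field names `text` and `toxicity` are assumptions standing in for whatever Document Path, `INPUT_FIELD_NAME`, and `OUTPUT_FIELD_NAME` you configured:

```js
// Minimal sketch (Node.js, Firebase Admin SDK). Assumes the function was installed
// with Document Path "comments/{docId}", INPUT_FIELD_NAME "text", and
// OUTPUT_FIELD_NAME "toxicity"; substitute your own configuration.
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');

initializeApp();
const db = getFirestore();

async function addCommentAndWaitForScores() {
  // Writing the input field triggers the function.
  const ref = await db.collection('comments').add({ text: "Shut up. You're an idiot!" });

  // The function writes the attribute scores back to the same document.
  const unsubscribe = ref.onSnapshot((snap) => {
    const scores = snap.get('toxicity');
    if (scores) {
      console.log('TOXICITY:', scores.TOXICITY);
      unsubscribe();
    }
  });
}

addCommentAndWaitForScores();
```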
Score types are the formats in which the API returns attribute scores. Currently, the only supported score type is PROBABILITY. Probability scores are values between 0 and 1; a higher score indicates a greater likelihood that a reader would perceive the comment as containing the given attribute. For example, a comment like "You are an idiot" may receive a probability score of 0.8 for the TOXICITY attribute, indicating an 80% likelihood that a reader would perceive that comment as toxic.
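Because every attribute comes back as a probability, a common pattern is to gate content on a threshold. The helper and threshold below are illustrative only and are not part of the function:

```js
// Illustrative only: decide whether a comment needs human review based on the
// scores the function wrote to OUTPUT_FIELD_NAME. The 0.8 threshold is an
// arbitrary example; tune it to your moderation policy.
const THRESHOLD = 0.8;

function needsReview(scores) {
  return Object.values(scores).some((probability) => probability >= THRESHOLD);
}

// Using the example output shown above:
const scores = {
  IDENTITY_ATTACK: 0.25810158,
  INSULT: 0.98741966,
  PROFANITY: 0.88505524,
  SEVERE_TOXICITY: 0.7638047,
  THREAT: 0.16459802,
  TOXICITY: 0.8897058,
};

console.log(needsReview(scores)); // true (several attributes exceed 0.8)
```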
### Resources

- [Perspective API](https://developers.perspectiveapi.com/s/)

Cost
FREE
Version
0.0.1
Language
JAVASCRIPT
Workspace
firestore
Tags
perspective
toxicity
AI