Custom Integration

This document describes the API for publishing data to a custom integration.

Publish data

Publish resource data to a custom integration.

This endpoint accepts a list of resource data objects, each containing the current state of a resource to be processed. The data is processed asynchronously; if processing succeeds, the results become available in the Custom Automated Tests feature, as well as in Asset Inventory and/or other Secureframe features.

Early access: This endpoint is currently only available to customers in our Early Access program. Please contact Secureframe to request access. All details are subject to change.

New schemas

If a schema does not already exist for the provided slug, one will be created automatically based on the structure of the first resource data object in the request.

After a schema is created, it must be defined using the "Define Schema" workflow in the Secureframe application interface in order for the data to be processed.

Resource data format

You can provide data in any JSON structure you like, as long as all resources in a schema follow the same format. We recommend providing the data with as few changes as possible from the original format in your system or as supplied by the vendor.

At least one field must uniquely identify the resource within the schema, and be set as its "primary ID" when defining the schema. The primary ID cannot be changed after the schema is created.

The data may be nested (e.g. { "name": { "first": "Alan", "last": "Turing" } }). However, arrays are not currently supported and will be ignored.
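Because arrays are ignored, it can be useful to preview what a nested resource will look like once array fields are dropped. The helper below is a minimal sketch for that purpose; its name and behavior are illustrative assumptions, not part of the API:

```python
import json

def strip_arrays(value):
    """Recursively drop array values, mirroring the documented
    behavior that arrays in resource data are ignored."""
    if isinstance(value, dict):
        return {k: strip_arrays(v) for k, v in value.items()
                if not isinstance(v, list)}
    return value

resource = {
    "id": "123",
    "name": {"first": "Alan", "last": "Turing"},  # nested objects are supported
    "tags": ["admin", "engineer"],                # arrays will be ignored
}

print(json.dumps(strip_arrays(resource)))
# → {"id": "123", "name": {"first": "Alan", "last": "Turing"}}
```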

Supported model mappings

When you send resource data to Secureframe using schemas that have a model mapping set up, the data will be mapped to corresponding models in the Secureframe application.

You can structure your data in any way that makes sense for your systems, as long as all required fields are present for the chosen category.
The field names in your data don't need to match our internal field names - the mapping is handled automatically based on the schema definition.
We recommend providing data in a format as close as possible to the examples below. This approach simplifies integration and makes it easier to properly map your data to our system.
After processing, your data will appear in the appropriate sections of the Secureframe application, such as Asset Inventory, Personnel, or Training Records, depending on the schema's category.

Cloud resources

Cloud resources are used for creating and updating records of cloud-based or on-premises servers, virtual machines, VPNs, databases, and similar infrastructure. Use them any time you want to import data about hardware or cloud-based infrastructure components that are in scope for your compliance plan.

Example resource data for a cloud resource:

{
  "id": "123", // required, type: string
  "cloud_resource_type": "aws_ec2_instance", // required, type: string, supported values listed below
  "description": "My Cloud Resource" // optional, type: string
}
Supported values for `cloud_resource_type`
  • account
  • acm_certificate
  • alert
  • analytics
  • api_management_service
  • app_gateway
  • app_workflow
  • athena_workgroup
  • audit_config_all_services
  • autoscaling_group
  • batch_account
  • cdn
  • certificate
  • cloudfront_distribution
  • cloudtrail
  • cloudwatchlogs_log_group
  • cluster
  • cluster_node_pool
  • compute_backend_service
  • compute_disk
  • compute_instance
  • compute_network
  • compute_subnetwork
  • compute_target_http_proxy_list
  • compute_url_map
  • configservice_recorder
  • container_cluster
  • container_registry
  • crypto_key
  • database
  • database_backup
  • database_firewall_rule
  • database_replica
  • datalake_analytics
  • datalake_storage
  • dms_instance
  • diagnostic_setting
  • dns_managed_zone
  • docker_image
  • domain
  • domain_record
  • droplet
  • droplet_neighbor
  • dynamodb_table
  • ec2_image
  • ec2_instance
  • ec2_security_group
  • ec2_snapshot
  • ec2_subnet
  • ec2_volume
  • ec2_vpc
  • ec2_vpc_peering_connections
  • ecr_repository
  • efs_filesystem
  • eks_cluster
  • elasticloadbalancing
  • elastictranscoder_pipeline
  • elb
  • elbv2
  • elbv2_listener
  • es_domain
  • event_hub
  • firehose_stream
  • firewall
  • floating_ip
  • fsx_file_system
  • glacier_vault
  • guardduty_detector
  • heroku_addon
  • heroku_app
  • iam_certificate
  • iam_group
  • iam_mfa_device
  • iam_password_policy
  • iam_role
  • iam_user
  • iam_account
  • iam_credential_report
  • image
  • iot
  • key
  • key_vault
  • keyring
  • kinesis_stream
  • kms_key
  • lambda_function
  • load_balancer
  • log_alert
  • log_profile
  • metric
  • microsoft_compute_virtualmachines
  • microsoft_compute_virtualmachines_scaleset
  • microsoft_container_images
  • microsoft_dbformysql_servers
  • microsoft_dbforpostgresql_servers
  • microsoft_sql_servers
  • microsoft_sql_servers_blob_auditing_policy
  • microsoft_sql_servers_databases
  • microsoft_storage_storageaccounts
  • monitoring_alert_policy
  • nat_gateway
  • network_load_balancer
  • network_watcher
  • organizations_account
  • policy_assignment
  • project
  • project_resource
  • public_ip_address
  • rds_cluster
  • rds_instance
  • rds_snapshot
  • redis_service
  • resourcemanager_project
  • redshift
  • region
  • registry
  • registry_repository
  • route53domain
  • route_table
  • s3_bucket
  • sagemaker_notebook
  • search_service
  • security_auto_provisioning_setting
  • security_contact
  • security_group
  • servicebus
  • service_account
  • ses_dkim
  • ses_ruleset
  • snapshot
  • sns_topic
  • space
  • space_cor
  • sql_instance
  • sqs_queue
  • ssl_proxy
  • ssm_instance
  • ssm_parameter
  • storage_bucket
  • storage_container
  • storage_volume
  • subscription
  • transfer_server
  • virtual_network
  • vpc
  • vpc_member
  • web_app_service
  • xray_encryption_config

Training records

Training records specify that a certain person has completed a training course. These should be used when you have data from an external training system and you want to bring that data into Secureframe.

Example resource data for a training record:

{
  "id": "123", // required, type: string
  "completed_at": "2024-01-01T00:00:00Z", // required, type: datetime, format: ISO 8601
  "user_email": "john.doe@example.com", // required, type: string
  "training_slug": "security_awareness_training" // required, type: string, supported values listed below
}
Supported values for `training_slug`
  • ccpa_training
  • gdpr_training
  • hipaa_training
  • pci_secure_code_training
  • pci_training
  • security_awareness_training

Personnel accounts

Personnel accounts are records that a person has an account granting them access to a certain system.
You can use this to import information about these accounts into the Secureframe application's Personnel section.

Example resource data for a personnel account:

{
  "id": "123", // required, type: string
  "email": "john.doe@example.com", // required, type: string
  "secondary_email": "john.doe2@example.com", // optional, type: string
  "username": "john.doe", // optional, type: string
  "first_name": "John", // optional, type: string
  "preferred_first_name": "John", // optional, type: string
  "last_name": "Doe", // optional, type: string
  "admin": false, // optional, type: boolean
  "active": true // optional, type: boolean
}

Processing

This data will be processed asynchronously, and a 202 Accepted response indicates the data is enqueued for processing. There may be a delay before the data is available in the Secureframe application, especially if a large amount of data is being processed.

Data for a given connection will be processed in the order it is enqueued, even if provided in separate requests or for separate schemas.

Partial updates

By default, the data in a request replaces the existing data for the resource and is expected to be the complete state of that resource. To perform a partial update, set the `partial` parameter to true. When processing a partial update, only the fields specified in the request are updated; any fields not specified are left unchanged. To delete a field, set its value to null.
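As a sketch, a partial update that changes one field on a personnel account and deletes another might be built like this. The field values are illustrative; only the primary ID and the fields being changed are included:

```python
import json

# Only the primary ID plus the fields being changed are sent;
# everything else on the resource is left untouched.
payload = {
    "schema_slug": "users",
    "vendor_slug": "acme",
    "partial": True,
    "resource_data": [
        {
            "id": "123",                      # primary ID, always required
            "preferred_first_name": "Johnny", # field being updated
            "secondary_email": None,          # null deletes the field
        }
    ],
}

body = json.dumps(payload)
```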

Security: header_authorization
Request

Path parameters

id (required, string <uuid>)
  The ID of the custom connection.

Request body schema: application/json

schema_slug (required, string)
  The slug identifying the data type this data conforms to. This is used to identify the schema to use for processing the data.

vendor_slug (required, string)
  The slug identifying the vendor this data should be attributed to. This must match the vendor configured for the connection.

resource_data (required, array of objects)
  An array of objects representing the current state of a set of resources provided by this data source.

partial (boolean)
  If true, the data will be processed as a partial update. If set, only the fields that are present in the data will be updated, and any fields that are not present will be left unchanged. The primary ID of the resource must always be present in the data.

Responses

202 Accepted
  The data is enqueued for processing.

400 Bad Request
  The request body was not in the correct format.

401 Unauthorized
  The Authorization header was invalid.

403 Forbidden
  The API key provided was not authorized to push data for this custom connection.

404 Not Found
  A custom connection could not be found for the provided ID.

POST /custom_connections/{id}/data
Request samples

application/json

{
  "schema_slug": "users",
  "vendor_slug": "acme",
  "resource_data": []
}
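Putting the pieces together, a publish request might be assembled as below. This is a minimal sketch: the base URL, connection ID, and API key are placeholder assumptions, and the request is only constructed here, not sent.

```python
import json
import urllib.request

# Placeholder values -- substitute your own base URL, connection ID, and API key.
BASE_URL = "https://api.example.com"
CONNECTION_ID = "00000000-0000-0000-0000-000000000000"

payload = {
    "schema_slug": "users",
    "vendor_slug": "acme",
    "resource_data": [
        {"id": "123", "email": "john.doe@example.com"},
    ],
}

req = urllib.request.Request(
    url=f"{BASE_URL}/custom_connections/{CONNECTION_ID}/data",
    method="POST",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
    },
)

# Sending with urllib.request.urlopen(req) should yield a
# 202 Accepted response once the data is enqueued for processing.
```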