Feature Request Audit Logs
This feature is currently in Public Preview and must be enabled by Tecton Support.
Feature Request Audit Logs allow you to see what requests are being sent to your production feature serving endpoint.
Use Cases for Feature Request Audit Logs
- Prevent data leakage. You can integrate audit logs with a SIEM (such as Splunk) to detect malicious activity, such as an abnormal number of requests or login brute-forcing.
- Audit reporting. You can download all logs within a time frame to verify, for example, that old API keys were rotated and are no longer in use.
How to Use
- Contact Tecton Support to enable the feature.
- Once the feature is enabled, you will find logs written to your cloud provider's object storage (e.g. s3://tecton-{DEPLOYMENT_NAME}/logging/audit_logs/), partitioned by time and written as newline-separated JSON objects.
Possible values for the Result field in requestDetails include MissingMetadata, MissingAuthorization, CouldNotDecodeApiKey, InvalidApiKey, or an empty string for a successful authentication.
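Because each log file is a sequence of newline-separated JSON objects, a SIEM-style check can be done with a few lines of standard-library Python. The sketch below assumes the log files have already been downloaded locally; the helper name summarize_auth_results is illustrative, not part of any Tecton API.

```python
import json
from collections import Counter

def summarize_auth_results(lines):
    """Count auth outcomes across newline-separated JSON audit log records.

    An empty Result string indicates successful authentication, so it is
    reported under the label "success"; all other Result values (e.g.
    InvalidApiKey) are counted under their own name.
    """
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:  # skip blank lines between records
            continue
        record = json.loads(line)
        result = record.get("requestDetails", {}).get("Result", "")
        counts["success" if result == "" else result] += 1
    return counts
```

A sudden spike in non-empty Result values (for example, many InvalidApiKey entries in one time partition) is the kind of signal you would alert on.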
Successful Request Sample:
```json
{
  "requestContents": {
    "params": {
      "Locator": {
        "FeatureServiceName": "test_fs"
      },
      "workspace_name": "prod",
      "join_key_map": {
        "user_id": 12345678
      }
    }
  },
  "requestDetails": {
    "Result": "",
    "KeyId": "9d586cef4f91444c8e726f7129fd09ae",
    "KeyCreator": "user@email",
    "ObscuredKey": "****f698",
    "KeyDescription": "Application API Key"
  },
  "requestTime": "2021-09-16T21:22:49.387109Z"
}
```
Invalid Key Request Sample:
```json
{
  "requestContents": {
    "params": {
      "Locator": {
        "FeatureServiceName": "user_recs"
      },
      "workspace_name": "prod",
      "request_context_map": {
        "amount": 1050
      }
    }
  },
  "requestDetails": {
    "Result": "CouldNotDecodeApiKey",
    "KeyId": "",
    "KeyCreator": "",
    "ObscuredKey": "",
    "KeyDescription": ""
  },
  "requestTime": "2021-09-16T21:26:32.3952655Z"
}
```
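The KeyId field in requestDetails also supports the key-rotation audit described above: after rotating a key, you can scan the downloaded logs for requests that still used the old KeyId. This is a minimal sketch; the function name and the idea of passing a set of rotated key IDs are illustrative assumptions, not a Tecton-provided tool.

```python
import json

def find_rotated_key_usage(lines, rotated_key_ids):
    """Return (requestTime, KeyId) pairs for records that used a rotated key.

    `lines` is an iterable of newline-separated JSON audit log records;
    `rotated_key_ids` is a set of KeyId strings that should no longer appear.
    """
    hits = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        key_id = record.get("requestDetails", {}).get("KeyId", "")
        if key_id in rotated_key_ids:
            hits.append((record.get("requestTime", ""), key_id))
    return hits
```

An empty result over the audited time frame is evidence that the rotated keys are genuinely out of use.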
Write Frequency
To optimize the response latency of feature serving requests, request audit logs are batched and written asynchronously. By default, they are written every 60 seconds, or sooner if the file size exceeds an internal limit. As a result, if the cloud provider experiences an outage, up to 60 seconds of logs could fail to be persisted.