Updated by Amalie
FIAM Server-2-Server documentation
This document outlines AudienceProject's S2S solution that allows Publishers and Broadcasters to upload media consumption log files to AudienceProject's measurement system.
AudienceProject's FIAM measurement solution supports three different methods for sending measurement data to the AudienceProject platform:
- Scripts
- SDKs
- Server-to-server (S2S)
While the first two methods (Scripts, SDK) are implemented client-side in browsers and applications, the S2S solution is a server-side solution that allows our clients to upload event-log files directly to AudienceProject.
Such server logs will often originate from systems or platforms that don't support client-side data collection, or where legal considerations make it challenging to implement tags, scripts, or SDKs.
Scripts and SDK implementation guidelines are all well documented in the Website implementation and Native application implementation documentation available on this site. This document only outlines the S2S delivery option.
The S2S solution can be combined with all of the above-outlined data collection methods, but it requires a separate commercial agreement.
The S2S model supports all digital media types that can be logged. Our full event taxonomy from AudienceReport is also available for video streams.
An important consideration is when a logged event qualifies for being counted as a unique reach-building event within the measurement solution. MMF measures traditional websites and native applications as well as video streams. While the methodological definition of what constitutes a page view is clear for traditional web content, there is more ambiguity when measuring native applications, as they are often single-page applications (SPAs) by design. The same challenge applies to video streams: when should video content in a stream be counted as the equivalent of a “unique” page view?
The official implementation guidelines are outlined below.
Files need to be uploaded to an AWS S3 bucket owned by AudienceProject. You will be provided a role from AudienceProject that you can assume to access your client-specific S3 bucket. Please reach out to our Customer Success Manager for client-specific bucket information. In return, you need to provide AudienceProject with the 12-digit account ID of your AWS account.
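A sketch of this access pattern, assuming boto3 and placeholder values for the role ARN and bucket name (the real values come from AudienceProject, and the helper names here are our own):

```python
def inbox_key(filename):
    """Files destined for ingestion go into the bucket's inbox folder."""
    return f"inbox/{filename}"

def upload_log(role_arn, bucket, local_path):
    """Assume the AudienceProject-provided role, then upload one log file.

    role_arn and bucket are placeholders; AudienceProject supplies the real
    values. Requires boto3 and credentials for your own 12-digit AWS account.
    """
    import boto3  # imported here so the path helper above stays dependency-free

    # Exchange your own credentials for temporary ones under the provided role.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn, RoleSessionName="s2s-upload"
    )["Credentials"]

    # Use the temporary credentials to upload into the client-specific bucket.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.upload_file(local_path, bucket, inbox_key(local_path.split("/")[-1]))
```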
Support for SFTP uploads to Amazon S3 will be made available later in 2022.
Files uploaded will be retained for 30 days across all folders in the bucket.
The upload bucket is not just a simple drive; it comes with validation and test functionality built in. You can use the S3 environment to test the composition and format of log files without submitting them for inclusion in the measurement, pull error logs, and track the status of log files uploaded for ingestion into the measurement solution. Each of these functions is governed by a different folder within the S3 bucket. Once you have established access to the upload bucket, you will find four folders.
inbox
Log files to be ingested should be uploaded to this folder. It is a holding place for log files that are yet to be validated and ingested. You only have read and upload access to this folder; once a file is uploaded, you cannot delete it again.
processed
Log files that passed validation and have been successfully ingested will be moved from “inbox” to “processed”. Successful log files will be moved to a date-partitioned path within the folder, i.e.:
error
Log files that failed validation will be moved to this folder from the “inbox”, alongside an error file containing details on why the file was rejected. This log file can be found at:
validate
You can upload files to the “validate” folder if you wish to ensure that the file type, compression type, and file contents pass validation. Files that pass validation will be put into the “processed” folder, while failed files will be put into the “error” folder.
Please pay attention to the delivery deadlines outlined in the MMF contract. While AudienceProject’s system can and will process log-files delivered with a delay, such delays might violate the delivery cadence outlined in the MMF contract.
The “technical” ingestion schedule (not to be confused with contractual obligations)
Log files should:
- Ideally be uploaded as soon as they are available. However, batch uploads several times a day will also work.
- Be fully uploaded at the latest by 01.00 UTC the morning after. An example: the daily logs for Monday should be fully uploaded to S3 by Tuesday morning at 01.00 UTC. Log files not present in the S3 bucket by 01.00 UTC will not be processed until the following day.
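The deadline rule above can be expressed in a few lines (a sketch; the function name is ours):

```python
from datetime import date, datetime, time, timedelta, timezone

def upload_deadline(log_date: date) -> datetime:
    """Return the UTC deadline by which the daily log file for `log_date`
    must be fully uploaded: 01.00 UTC the following morning."""
    return datetime.combine(log_date + timedelta(days=1), time(1, 0), tzinfo=timezone.utc)

# Logs for Monday 2022-05-02 must be in the bucket by Tuesday 01.00 UTC.
print(upload_deadline(date(2022, 5, 2)))  # 2022-05-03 01:00:00+00:00
```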
Upload File format
File Type and Compression
All files must be:
- Tab-separated values (TSV, using double quotes for quoting)
- Compressed with gzip
No individual file can be larger than 50 MB compressed (gzip).
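As an illustration of the required file type and compression, a daily log file could be produced as follows (the row values and column order here are placeholders, not the official schema ordering):

```python
import csv
import gzip
import os

# Illustrative rows only: the real column order must follow the schema below.
rows = [
    ["1651400000", "203.0.113.7", "m-example", "user-123", "pageview"],
    ["1651400060", "2001:db8::1", "s-example", "user-456", "pageview"],
]

# Write a tab-separated, gzip-compressed file with double-quote quoting.
with gzip.open("events.tsv.gz", "wt", newline="") as f:
    writer = csv.writer(f, delimiter="\t", quotechar='"', quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)

# Each individual file must stay under 50 MB compressed.
assert os.path.getsize("events.tsv.gz") < 50 * 1024 * 1024
```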
The contents of each file must comply with the schema below.
- Timestamp: Unix epoch format in seconds. Inputs longer than 10 digits will be truncated.
- IP address: IPv4/IPv6 address of the client device. The IP address must not be obfuscated, masked, or hashed. If the X-Forwarded-For HTTP header is present in your system, the contents of this header should be specified in this field. IPv6 should be specified if the client device supports it; otherwise, IPv4 should be used.
- TrackPoint ID (tail code): A placement-specific unique identifier given to you by AudienceProject. This is either a media or section TrackPoint, and it should start with “m-” or “s-”.
- Identifier: A unique identifier of the client. Required on a major part of traffic.
- Identifier type: one of:
  - “IDFA” for Apple's Identifier for Advertisers
  - “IDFV” for Apple's Identifier for Vendors
  - “AAID” for Android Advertising ID
- Referer: The value of the Referer header in the request.
- Page URL: An absolute address of the page that makes the request. Required, but only for web data.
- Application name: Full application name. Required, but only for in-app data.
- Application version: Full application version. Required, but only for in-app data.
- Event type: Type of event that will be measured.
- User agent: User-agent string of the client device.
- Consent string: TCF 2.0 consent string.
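The timestamp truncation rule can be illustrated as follows (we assume truncation keeps the leading 10 digits, so a millisecond epoch becomes a second epoch; this interpretation and the function name are ours, not confirmed by the spec):

```python
def normalize_timestamp(raw: str) -> str:
    """Apply the stated rule: inputs longer than 10 digits are truncated.
    Assumption: truncation keeps the leading 10 digits, so a millisecond
    epoch such as 1651400000123 becomes the second epoch 1651400000."""
    return raw[:10] if len(raw) > 10 else raw

print(normalize_timestamp("1651400000123"))  # 1651400000
```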
Log file validation mechanism
The S2S solution comes with a built-in file validation mechanism that automatically triggers on all files uploaded to the inbox folder. If a file fails validation, it will not be processed. Instead, the file will be moved to the error folder and an error log will be deposited with the file in the folder:
If the file is approved, it will be deposited in the processed folder.
A validation folder has been created for file-format testing. Files uploaded to the validation folder will not be ingested into the measurement, but are validated according to the same rules and principles as a “real” log file. Currently, the following log-file checks are in place:
A log file will be rejected if:
- The file doesn’t match the schema described above, e.g. too many or too few columns, columns not tab-separated, wrong file format, wrong compression, etc.
- Any individual logline in the file violates at least one of the following rules:
- The point in time is older than 14 days
- The point in time is in the future
- The address is not a valid IPv4 or IPv6 address
- The address is part of a reserved address block
- IPv4: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 or 127.0.0.0/8
- IPv6: fc00::/7 or ::1/128
- identifier contains an unhashed email address
- tailcode does not match one of the tail codes provided to you by AudienceProject
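A minimal sketch of the timestamp and IP rules above, using Python's standard ipaddress module (this is our approximation, not AudienceProject's actual validator; the identifier and tail-code checks are omitted):

```python
import ipaddress
import time

MAX_AGE_SECONDS = 14 * 24 * 3600  # points in time older than 14 days are rejected

# The reserved blocks listed in the rules above.
RESERVED_BLOCKS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8",
    "fc00::/7", "::1/128",
)]

def valid_timestamp(ts, now=None):
    """Reject timestamps older than 14 days or in the future."""
    now = time.time() if now is None else now
    return now - MAX_AGE_SECONDS <= ts <= now

def valid_ip(address):
    """Reject malformed addresses and addresses in reserved blocks."""
    try:
        ip = ipaddress.ip_address(address)
    except ValueError:
        return False
    # Membership tests across IP versions simply return False, so mixing
    # IPv4 and IPv6 blocks in one list is safe.
    return not any(ip in block for block in RESERVED_BLOCKS)
```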
These validation rules do not guarantee that the contents of your log files are 100% correct. It is still your responsibility to perform proper quality assurance checks on the contents of the files delivered.
If your log file contains data that is invalid in ways these checks do not catch, the file will still be ingested into AudienceReport and treated identically to other log data. It is ultimately your responsibility to ensure the correctness of the ingested log data.
AudienceProject will continue to expand the validation rules as we identify additional failure modes.
The file format also offers support for TCF 2.0 consent strings within the delivery to AudienceProject. The field is, however, not mandatory.
If a consent string is missing, AudienceProject will by default add the hardcoded consent string outlined in the contract.
If a TCF 2.0-compatible consent string is present, AudienceProject's data processing pipeline will parse and adhere to the consent settings. It is therefore important to stress that any TCF-compatible consent string delivered to AudienceProject should be vetted before sending it. One cannot assume that configurations, or changes to the existing configuration, in the TCF CMP solution deployed by the Publisher will automatically be expressed in the log-file exporter used for generating the logs uploaded to AudienceProject.
It is also important to ensure that if the Publisher takes it upon themselves to generate the consent string instead of relying on the hardcoded string, the new Publisher consent string will be authoritative. In other words, AudienceProject's vendor ID and purpose configuration needs to be properly configured on the Publisher side.
The consent requirements and purpose configuration need to be configured according to MMF guidelines (as per the contract).
Should all big-screen data be uploaded as S2S?
AudienceProject's media measurement solution supports four different methods for sending measurement data to the AudienceProject platform:
- Tags
- Scripts
- SDKs
- Server-to-server (S2S)
A combination of methods can be used: video-stream events can (often) also be delivered with tags, scripts, or SDKs. The S2S solution is complementary, not a replacement.
As a part of the deliverable "4. Server-to-server transfer of event logs to be offered to all MMF members" AudienceProject will also deliver a proposal for a future IP masking/hashing service to allow MMF members to mask/hash IP addresses in 2022.
Since IP addresses are currently in use as part of the MMF agreement, and are also a requirement for delivering the Media Rating Council-accredited IAB invalid traffic filtering (as well as geographical validation), the proposed masking solution will have to be defined in such a way that there is no loss of functionality with regard to the already agreed MMF deliverables.
What about FTP support?
The current solution supports AWS S3 uploads. Later in 2022, we will add support for SFTP uploads as well.