So at Datadog, after aggregation with Spark and storage as Parquet, what is used to serve queries over all the aggregated telemetry data (logs, APM, and infra telemetry) to consumers?
(interestingly, we have a nearly identical data ingestion/ETL stack running on spot instances and saving to parquet/s3)