
Streaming API docs

Push Service docs

Pros

Streaming API

  • No additional cost
  • Constant open connection and flow of data
  • If the Stream connection goes down, Validic can queue the data for up to 7 days; once the connection is live again, Validic will push all cached data
  • The Stream sends pokes every 5 seconds to confirm the Stream is live; pokes are even returned in the Replay Stream (a minimal connection sketch follows this section)
  • If data is missed, you have access to the Replay Stream to catch any missed data from the last 30 days
  • Allows for load balancing
  • Resources: https://help.validic.com/space/VCS/2526314497/SSE+Connection+Best+Practices
  • Another useful resource: https://helpdocs.validic.com/docs/working-with-a-stream

Push Service

  • Easy to implement
  • Guarantees data will either be delivered or retried (if we miss data, we'll know about it)
  • Can set data Push frequency
  • Can utilize OAuth or Mutual TLS
  • Can define custom retry codes for retry logic (a receiver sketch appears after the Cons section)
  • Hit REST API endpoints to determine batch statuses
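
The Streaming API is an SSE connection, so the client's main job is to keep that connection open, treat the 5-second pokes as a liveness signal, and reconnect when the stream goes quiet. Below is a minimal Python sketch under stated assumptions: the endpoint URL, auth header, and 30-second staleness threshold are placeholders rather than values from the Validic docs; see the SSE Connection Best Practices KBA linked above for the recommended logic.

```python
"""Minimal SSE consumer sketch for the Streaming API.

Assumptions (placeholders, not taken from Validic docs): the endpoint URL,
the auth header, and the 30-second staleness threshold. The 5-second poke
interval comes from the comparison above.
"""
import time
import requests

STREAM_URL = "https://example.invalid/validic/stream"  # placeholder endpoint
TOKEN = "YOUR_STREAM_TOKEN"                            # placeholder credential
STALE_AFTER_SECONDS = 30  # assumption: ~6 missed 5-second pokes = stale stream


def handle_event(payload: str) -> None:
    """Persist or forward one event payload; replace with real processing."""
    print(payload)


def consume_stream() -> None:
    """Read the SSE stream line by line; the read timeout doubles as a staleness check."""
    with requests.get(
        STREAM_URL,
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "text/event-stream"},
        stream=True,
        timeout=STALE_AFTER_SECONDS,
    ) as resp:
        resp.raise_for_status()
        for raw_line in resp.iter_lines(decode_unicode=True):
            if not raw_line:
                continue  # blank lines separate SSE events
            if raw_line.startswith("data:"):
                handle_event(raw_line[len("data:"):].strip())
            # any other traffic (e.g. keep-alive pokes) still resets the read timeout


if __name__ == "__main__":
    # If the connection drops or goes stale, back off briefly, reconnect,
    # and rely on the Replay Stream to cover any gap.
    while True:
        try:
            consume_stream()
        except (requests.ConnectionError, requests.Timeout):
            time.sleep(5)
```

Because pokes arrive every few seconds on a healthy stream, the read timeout doubles as the stale-stream check discussed in the Cons below: roughly 30 seconds of silence raises a timeout, the client reconnects, and the Replay Stream covers any gap.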

Cons

Streaming API

  • The unidirectional data protocol means Validic can't confirm receipt of data. The pings covered in the KBA linked above explain how a client can tell whether data is flowing and suggest logic for detecting when it has stopped; the customer is responsible for implementing that logic to ensure all data is captured.
  • If data goes missing, Validic has no way of knowing. See the KBA linked above for how to tell when the stream goes stale; with proper logic on the customer's end, data will not be missed.
  • The only way to recover missing data is the Replay Stream, but customers won't necessarily know when to call it. See the KBA above for suggested logic to detect when data is missing.
  • The Replay Stream replays all data over a specified period, so there is a chance of reprocessing duplicate data. See the working-with-a-stream doc linked above; customers will need logic on their side to write only the checksums they don't already have in their database when running the Replay Stream, so they don't duplicate data (a de-duplication sketch follows this section).
  • Implementing the Replay Stream process will require additional development
  • The Replay Stream is not load balanced, and it is responsible for processing large spikes of data, so it will be a performance bottleneck

Push Service

  • Additional annual cost
  • Recovering more than 6 hours of data requires contacting Support
  • Not a continuous flow; it is a batching mechanism
  • If you want FHIR transformation, it takes longer to implement and needs data mapping
  • Cannot use the Replay Stream with Push
  • If a batch fails, you will need to contact Support for a batch retry
  • The Push Service doesn't allow for load balancing; the entire flow of data has to be ingested through one endpoint
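
Since the Replay Stream re-sends all data for the requested window, records already written from the live stream will arrive again. Below is a minimal sketch of the checksum-based de-duplication mentioned above, assuming each replayed record carries a checksum field (the field name here is illustrative) and using SQLite purely for demonstration:

```python
"""Sketch of checksum-based de-duplication for Replay Stream processing.

Assumes each replayed record carries a checksum field, as the note above
implies; swap SQLite for your own data store.
"""
import json
import sqlite3

conn = sqlite3.connect("validic_records.db")
with conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records (checksum TEXT PRIMARY KEY, payload TEXT)"
    )


def write_if_new(record: dict) -> bool:
    """Insert the record only if its checksum has not been seen before.

    Returns True when the record was new, False when it was a duplicate
    already delivered by the live stream.
    """
    checksum = record.get("checksum")
    if checksum is None:
        return False  # decide separately how to handle records without a checksum
    with conn:
        cur = conn.execute(
            "INSERT OR IGNORE INTO records (checksum, payload) VALUES (?, ?)",
            (checksum, json.dumps(record)),
        )
    return cur.rowcount == 1


if __name__ == "__main__":
    # A duplicate checksum is silently skipped on the second call.
    print(write_if_new({"checksum": "abc123", "type": "summary"}))  # True
    print(write_if_new({"checksum": "abc123", "type": "summary"}))  # False
```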
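
On the Push side, the receiving endpoint drives retry behavior through the status codes it returns: any code registered with Validic as retryable tells the service to re-deliver the batch rather than drop it. Below is a minimal Flask sketch assuming a generic JSON batch payload; the route path, payload shape, and the 503 retry code are illustrative, not Validic-specified.

```python
"""Sketch of a Push Service receiver with a configurable retry response.

The route path, payload shape, and the 503 code are assumptions for
illustration; use whatever retry codes you have registered with Validic.
"""
from flask import Flask, request, jsonify

app = Flask(__name__)


class TransientStorageError(Exception):
    """Raised by store_batch() when a retry is likely to succeed."""


def store_batch(batch: dict) -> None:
    """Persist the batch; replace with real storage logic."""
    print(f"received batch with {len(batch.get('records', []))} records")


@app.post("/validic/push")  # hypothetical route; Validic posts batches to the URL you register
def receive_batch():
    batch = request.get_json(force=True, silent=True) or {}
    try:
        store_batch(batch)
    except TransientStorageError:
        # Returning a status code registered as retryable asks Validic
        # to re-deliver this batch later instead of dropping it.
        return jsonify({"status": "retry"}), 503
    return jsonify({"status": "accepted"}), 200


if __name__ == "__main__":
    app.run(port=8080)
```

If a batch still fails after retries, its status can be checked through the REST batch-status endpoints noted in the Pros above, or recovered by contacting Support as noted in the Cons.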