Known limitations
This feature is experimental and not ready for production use. It is available only as part of an Early Access Program and may undergo breaking changes until general availability.
Project data (BigQuery → GDS)
- GDS schema requirements apply: only numeric properties are currently supported when creating a graph projection.
- GDS node IDs must be positive signed integers: this may require data engineering to create valid input data for loading into AuraDS.
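As a rough illustration of the data engineering the two points above can require, the sketch below remaps arbitrary source IDs to positive signed integers and keeps only numeric properties. The function and field names are hypothetical, chosen for this example; this is not part of any official connector API.

```python
# Hypothetical pre-processing sketch: remap arbitrary source IDs to
# positive signed integers and drop non-numeric properties before
# loading nodes into AuraDS. All names are illustrative.

def prepare_nodes(raw_nodes):
    """Return (nodes, id_map): every node gets a positive integer ID,
    and only numeric (int/float, non-bool) properties are kept."""
    id_map = {}        # original ID -> positive signed integer
    prepared = []
    for node in raw_nodes:
        # Assign sequential IDs starting at 1, so all IDs are positive.
        gds_id = id_map.setdefault(node["id"], len(id_map) + 1)
        numeric_props = {
            k: v for k, v in node["properties"].items()
            if isinstance(v, (int, float)) and not isinstance(v, bool)
        }
        prepared.append({"id": gds_id, "properties": numeric_props})
    return prepared, id_map

raw = [
    {"id": "user-a", "properties": {"age": 41, "name": "Ada"}},
    {"id": "user-b", "properties": {"score": 0.7, "active": True}},
]
nodes, id_map = prepare_nodes(raw)
```

Keeping the `id_map` around is useful for joining results back to the original BigQuery rows after the projection has been analysed.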
Write data back (GDS → BigQuery)
- Nodes or relationships being written must fit in the memory of the Spark driver: streams from GDS must be consumed eagerly before transformation and write-back to BigQuery can occur, so the Spark environment must be sized appropriately.
- Only a single node stream or relationship stream is supported per job: if you need to write both nodes and relationships, or multiple types of nodes (based on label patterns), you must run multiple stored procedures.
- BigQuery tables must exist prior to execution: the procedure does not create the table, and the table must have a valid node or relationship schema.
- Writes use Committed-type streams: data written back to BigQuery is appended to the target table and becomes live as each batch is written. Any failure in this process may result in partial writes to the table (not all nodes or relationships written back).
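To make the last point concrete, the sketch below simulates Committed-type write behaviour in plain Python: each batch is appended (and visible) as soon as it is written, so a failure partway through leaves the earlier batches in the target table. The names and the failure hook are invented for this simulation; this is not the connector's actual code.

```python
# Minimal simulation of Committed-type stream semantics: batches are
# live as soon as they are appended, so a mid-stream failure produces
# a partial write. All names here are illustrative.

def write_in_batches(rows, batch_size, table, fail_at_batch=None):
    """Append rows to `table` batch by batch; optionally raise after
    `fail_at_batch` successful batches to model a mid-stream failure."""
    written = 0
    for start in range(0, len(rows), batch_size):
        if fail_at_batch is not None and written == fail_at_batch:
            raise RuntimeError("stream failed mid-write")
        table.extend(rows[start:start + batch_size])  # live immediately
        written += 1
    return written

table = []
try:
    write_in_batches(list(range(10)), batch_size=3, table=table, fail_at_batch=2)
except RuntimeError:
    pass
# The first two batches (6 rows) remain in the table: a partial write.
```

If your pipeline cannot tolerate partial results, plan for idempotent retries, for example by writing to a staging table and swapping it in only after the job completes.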