In the examples directory you'll find two useful notebooks that allow you to test Neo4j and PySpark in a cloud-to-cloud environment using Neo4j sandboxes and Google Colab.
These notebooks contain a set of examples that explain how the Neo4j Spark connector can fit into your data-driven workflow and, most importantly, they let you test your knowledge with a set of exercises after each section.
If you want more exercises, feel free to open an issue in the GitHub repository.
The neo4j_data_engineering.ipynb notebook explains how to interact with the Neo4j Spark connector from a Data Engineering perspective: how to write your Spark jobs and how to read/write data from/to Neo4j.
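To give a flavor of what the notebook covers, here is a minimal sketch of writing and reading nodes with the connector. The connection URL, credentials, and connector version are placeholders; adjust them to your own Spark/Scala versions and your Neo4j instance (for example, a sandbox):

```python
from pyspark.sql import SparkSession

# Pull in the Neo4j Spark connector; the version string below is an example,
# pick the one matching your Spark and Scala versions.
spark = (
    SparkSession.builder
    .appName("Neo4jExample")
    .config("spark.jars.packages",
            "org.neo4j:neo4j-connector-apache-spark_2.12:5.3.0_for_spark_3")
    .getOrCreate()
)

# Write a small DataFrame as :Person nodes, keyed on "id".
df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])
(
    df.write.format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "neo4j://localhost:7687")        # placeholder URL
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("labels", ":Person")
    .option("node.keys", "id")
    .save()
)

# Read the :Person nodes back into a DataFrame.
people = (
    spark.read.format("org.neo4j.spark.DataSource")
    .option("url", "neo4j://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("labels", ":Person")
    .load()
)
people.show()
```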
The neo4j_data_science.ipynb notebook explains how to interact with the Neo4j Spark connector from a Data Science perspective: how to combine pandas (in PySpark) with the Neo4j Graph Data Science library to highlight fraud in a banking scenario.
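As a rough illustration of that pattern, the sketch below runs a GDS algorithm through the connector's Cypher query option and hands the result to pandas. The projected graph name "accounts" and the accountId property are assumptions for the example, and it presupposes a `spark` session configured as above and an in-memory graph already projected with GDS:

```python
# Run PageRank via GDS inside a Cypher query and load the scores into Spark.
# "accounts" is an assumed in-memory graph; "accountId" is an assumed property.
scores = (
    spark.read.format("org.neo4j.spark.DataSource")
    .option("url", "neo4j://localhost:7687")        # placeholder URL
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("query", """
        CALL gds.pageRank.stream('accounts')
        YIELD nodeId, score
        RETURN gds.util.asNode(nodeId).accountId AS accountId, score
    """)
    .load()
)

# Convert to pandas for downstream analysis, e.g. flagging outlier scores
# as fraud candidates.
pdf = scores.toPandas()
print(pdf.sort_values("score", ascending=False).head())
```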