Logging framework

Neo4j provides logs for monitoring purposes. As of Neo4j 5.0, Neo4j uses Log4j 2 for logging. Log4j 2 is configured using two XML files, conf/server-logs.xml and conf/user-logs.xml, which are passed directly to Log4j.

Neo4j supports all default Log4j components. You can also add Log4j plugins by dropping them into the plugins directory. For more advanced usage, see the Log4j official documentation.

The following is a description of how a default Neo4j installation behaves and some typical examples of configuring it to use Log4j.

Log files

By default, the directory where the general log files are located is configured by server.directories.logs.

The following table describes the Neo4j general log files and the information they contain.

Table 1. Neo4j logs for monitoring
Filename Name Description


neo4j.log

The user log

Logs general information about Neo4j. For Debian and RPM packages, run journalctl --unit=neo4j to view it.


debug.log

The debug log

Logs information that is useful when debugging problems with Neo4j.


http.log

The HTTP log

Logs information about the HTTP API.


gc.log

The garbage collection log

Logs garbage collection information provided by the JVM.


query.log

The query log

Enterprise Edition. Logs information about queries that run longer than a specified threshold.


security.log

The security log

Enterprise Edition. Logs information about security events.


The Windows service log

Windows. Log of the console output when installing or running the Windows service.


The Windows service log

Windows. Logs information about errors encountered when installing or running the Windows service.

Logging configuration file anatomy

The logging configuration is done using a Log4j 2 XML configuration file.

Neo4j has two configuration files: one for neo4j.log, which contains general information about Neo4j, and one for all other logs (except gc.log; see Garbage collection log).

Where the files are located and which logs are enabled or disabled is managed via the neo4j.conf file.

Table 2. Log configuration settings in neo4j.conf
Configuration setting Default value Description



Path to the XML configuration file for debug.log, http.log, query.log, and security.log.



Path to the XML configuration file for the neo4j.log file.



Enable/disable the debug.log. It is highly recommended to keep it enabled.



Enable/disable the http.log.



Must be one of OFF, INFO, or VERBOSE. INFO produces less output than VERBOSE, while OFF completely disables the query.log.
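Taken together, the table above maps onto a neo4j.conf fragment like the following sketch. The setting names server.logs.config, server.logs.user.config, server.logs.debug.enabled, and dbms.logs.http.enabled are assumptions based on Neo4j 5.x and should be verified against the configuration settings reference for your version; db.logs.query.enabled is named elsewhere on this page.

```properties
# Path to the Log4j 2 XML configuration for debug.log, http.log, query.log, and security.log
server.logs.config=conf/server-logs.xml
# Path to the Log4j 2 XML configuration for neo4j.log
server.logs.user.config=conf/user-logs.xml
# Keep the debug log enabled (recommended)
server.logs.debug.enabled=true
# Enable the HTTP log
dbms.logs.http.enabled=true
# Query logging level: OFF, INFO, or VERBOSE
db.logs.query.enabled=INFO
```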

The Log4j 2 XML configuration file consists of three major components:

  • Appenders — define output locations, for example, a file, the console, network socket, etc.

  • Layouts — define how the output is formatted, for example, plain text, JSON, CSV, etc.

  • Loggers — route log events to one or several appenders.

You can also configure filters that determine whether, how, and which log events are published. For more information, see the Log4j official documentation.
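For example, a standard Log4j <ThresholdFilter> can be attached to an appender so that only events at or above a given level are published through it. This is a sketch in the style of the appender examples on this page; the appender name, file name, and pattern are illustrative:

```xml
<RollingFile name="WarningsLog" fileName="${config:server.directories.logs}/warnings.log"
        filePattern="${config:server.directories.logs}/warnings.log.%02i">
    <!-- Publish only WARN and ERROR events through this appender -->
    <ThresholdFilter level="WARN" onMatch="ACCEPT" onMismatch="DENY"/>
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-5p %m%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
</RollingFile>
```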

An illustrative Log4j 2 configuration file
<Configuration monitorInterval="30" packages="org.neo4j.logging.log4j">
    <Appenders>
        <RollingFile name="DebugLog" fileName="${config:server.directories.logs}/debug.log"
                filePattern="${config:server.directories.logs}/debug.log.%02i">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+2} %-5p [%c{1.}] %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="20 MB"/>
            </Policies>
            <DefaultRolloverStrategy fileIndex="min" max="7"/>
        </RollingFile>

        <RollingFile name="HttpLog" fileName="${config:server.directories.logs}/http.log"
                filePattern="${config:server.directories.logs}/http.log.%02i">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+0} %-5p %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="20 MB"/>
            </Policies>
            <DefaultRolloverStrategy fileIndex="min" max="5"/>
        </RollingFile>
    </Appenders>

    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="DebugLog"/>
        </Root>

        <Logger name="HttpLogger" level="INFO" additivity="false">
            <AppenderRef ref="HttpLog"/>
        </Logger>
    </Loggers>
</Configuration>
Table 3. Breakdown of the illustrative Log4j 2 configuration file line per line
Line(s) Description


1

Configuration tag with a monitorInterval of 30 seconds and a package namespace of org.neo4j.logging.log4j.
The monitor interval tells Log4j to periodically check the XML file for changes and reload the file if a change is detected.
The package namespace gives access to the Neo4j configuration lookup with ${config:<setting>}.

3, 12

Defines two <RollingFile> appenders. One writes to debug.log and one to http.log. The name must be unique and is used later when referencing the appender.

4, 13

Specifies a filePattern to be used when the file is rolled. The pattern renames the files to debug.log.01 and http.log.01 when they reach the defined trigger.

5, 14

Defines the layout to use for each appender. The DebugLog layout uses the GMT+2 timezone and the HttpLog layout uses GMT+0.

7, 16

Defines a size-based trigger. When the size of a file reaches 20 MB, the file is renamed according to the filePattern, and the log file starts over.

9, 18

Defines a rollover strategy, keeping up to 7 (DebugLog) and 5 (HttpLog) files as history. fileIndex="min" implies that the minimum/lowest number is the most recent one.

23-25

Defines a root logger with log level INFO. The root logger is a "catch-all" logger that captures everything that is not captured by the other loggers. Everything caught is sent to the appender named DebugLog.

27-29

Defines a logger that matches log events sent to the HttpLogger target with a log level of INFO or above. additivity="false" is set to fully consume the log event. With additivity="true", which is the default, the log event is also sent to the root logger.


Appenders

An appender represents a destination for log events. All Log4j standard appenders are available in Neo4j. For more details, see the Log4j official documentation. A few of the most common appenders are <Console>, <RollingFile>, and <RollingRandomAccessFile>.

<Console> appender

The console appender outputs log events to stdout or stderr.

An example of a console appender
<Console name="console" target="SYSTEM_OUT"> <!-- or SYSTEM_ERR -->
    <PatternLayout pattern="%m%n"/>
</Console>

<RollingFile> appender

A rolling file appender writes log events to a file and rolls the file over when certain criteria are met. A typical scheme is to keep one log file per day, or to roll the file once it reaches a specific size.

An example of a rolling file appender with one new log file each day
<RollingFile name="myLog" fileName="${config:server.directories.logs}/my.log"
        filePattern="${config:server.directories.logs}/my.log.%d{yyyy-MM-dd}">
    <!-- Layout -->
    <Policies>
        <TimeBasedTriggeringPolicy/>
    </Policies>
</RollingFile>

Rolling file appenders also support compression of rolled files. Adding one of .gz, .zip, .bz2, .deflate, or .pack200 as a suffix to the filePattern attribute causes the rolled file to be compressed with the corresponding compression scheme.

An example of a rolling file appender with zip compression
<RollingFile name="myLog" fileName="${config:server.directories.logs}/my.log"
        filePattern="${config:server.directories.logs}/my.log.%02i.zip">
    <!-- Layout -->
    <Policies>
        <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
</RollingFile>

<RollingRandomAccessFile> appender

The <RollingRandomAccessFile> appender is almost identical to <RollingFile>, with one major exception: all writes are buffered.

The drawback of using this appender is that log events might not be visible immediately and can be lost in case of a system crash. If these drawbacks are acceptable, switching to <RollingRandomAccessFile> can increase the performance of the logging.
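As a sketch, switching a rolling file appender to buffered writes only requires changing the element name; the attributes and nested elements are the same as in the <RollingFile> examples above:

```xml
<RollingRandomAccessFile name="myLog" fileName="${config:server.directories.logs}/my.log"
        filePattern="${config:server.directories.logs}/my.log.%02i">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-5p %m%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
    <DefaultRolloverStrategy fileIndex="min" max="7"/>
</RollingRandomAccessFile>
```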

Log layouts

The log files can be written in many different formats, referred to as layouts. Neo4j comes bundled with all the default layouts of Log4j 2, as well as a few Neo4j-specific ones. For more details on the default Log4j 2 layouts, see the Log4j official documentation.


<PatternLayout>

The pattern layout is the most common layout. It is a flexible layout configured with a pattern string. The pattern consists of converters, each prefixed with %. An example pattern could be

<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+0} %-5p [%c{1.}] %m%n"/>

It contains the following converters:

Table 4. Example pattern layout converters
Converter Description


%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+0}

The date of the log event. The time zone is optional; if omitted, the system time zone is used.


%-5p

The log level of the event. Can be DEBUG, INFO, WARN, or ERROR. Adding -5 between the % symbol and the p pads the level to exactly 5 characters.


%c{1.}

The class where the log event originated. Adding {1.} compacts the package names, e.g. org.apache.commons.Foo becomes o.a.c.Foo.


%m

The log message of the log event.


%n

A system-specific new line.

These are just a few examples. For all available converters, consult the Log4j 2 Pattern Layout documentation.


<Neo4jDebugLogLayout>

The Neo4j debug log layout is essentially the same as the <PatternLayout>. The main difference is that a header with diagnostic information useful for Neo4j developers is injected at the start of the log file. This layout should typically only be used for the debug.log file.

An example usage of the Neo4j debug log layout
<Neo4jDebugLogLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+0} %-5p [%c{1.}] %m%n"/>


<JsonTemplateLayout>

The <JsonTemplateLayout> is a JSON counterpart of the pattern layout. For more information, see the Log4j official documentation.

The JSON template layout takes an event template, which can be a file residing on the file system, for example:

<JsonTemplateLayout eventTemplateUri="file://path/to/template.json"/>

Alternatively, you can embed it right into the XML:

<JsonTemplateLayout eventTemplate='{
  "time": {
    "$resolver": "timestamp",
    "pattern": { "format": "yyyy-MM-dd HH:mm:ss.SSSZ", "timeZone": "UTC" }
  },
  "level": { "$resolver": "level", "field": "name" },
  "message": { "$resolver": "message" },
  "includeFullMap": { "$resolver": "map", "flatten": true },
  "stacktrace": { "$resolver": "exception", "field": "message" }
}'/>

There are also a couple of built-in templates available from the classpath, for example:

<JsonTemplateLayout eventTemplateUri="classpath:org/neo4j/logging/StructuredJsonLayout.json"/>
Table 5. Available built-in templates
eventTemplateUri Description


classpath:org/neo4j/logging/StructuredJsonLayout.json

Layout for structured log messages. Only applicable to the query.log and security.log.


Generic layout for logging JSON messages. Can be used for any log file.


classpath:org/neo4j/logging/QueryLogJsonLayout.json

Backward-compatible JSON layout that matches the Neo4j 4.x query log.



Graylog Extended Log Format (GELF) payload specification with additional _thread and _logger fields.


Google Cloud Platform structured logging with additional _thread, _logger, and _exception fields.


Same layout as the less flexible <JsonLayout>.


Loggers

Loggers forward log events to appenders. There can be an arbitrary number of <Logger> elements but only one <Root> logger element.

Loggers can be additive. An additive logger forwards a log event to its appender(s) and then passes the event on to the next matching logger; a non-additive logger forwards the event to its appender(s) and then drops it. The root logger is a special logger that matches everything, so any log event not picked up by another logger is caught by the root logger. It is therefore best practice to always include a root logger so that no log events are missed.

Configuration of loggers
<Configuration>
    <!-- Appenders -->

    <Loggers>
        <Root level="WARN">
            <AppenderRef ref="DebugLog"/>
        </Root>

        <Logger name="HttpLogger" level="INFO" additivity="false">
            <AppenderRef ref="HttpLog"/>
        </Logger>
    </Loggers>
</Configuration>

A logger has a level that filters log events. A level also matches all levels of higher severity: for example, a logger with level="INFO" forwards log events with level INFO, WARN, and ERROR, while a logger with level="WARN" forwards only WARN and ERROR events.

The following table lists all log levels raised by Neo4j and their severity level:

Table 6. Log levels
Message type Severity level Description


DEBUG

Low severity

Reports details on the raised errors and possible solutions.


INFO

Low severity

Reports status information and errors that are not severe.


WARN

Low severity

Reports errors that need attention but are not severe.


ERROR

High severity

Reports errors that prevent the Neo4j server from running and must be addressed immediately.

For more details on loggers, see the Log4j official documentation → Configuring Loggers.

Garbage collection log

The garbage collection log, or GC log for short, is special and cannot be configured with Log4j 2. The GC log is handled by the Java Virtual Machine (JVM), so its configuration must be passed directly on the command line. To simplify this, Neo4j exposes the following settings in neo4j.conf:

Table 7. Garbage collection log configurations
The garbage collection log configuration Default value Description



Enable garbage collection logging.



Garbage collection logging options. For available options, consult the documentation of the JVM distribution used.



The maximum number of history files for the garbage collection log.



The threshold size for rotation of the garbage collection log.
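As an illustrative neo4j.conf sketch, assuming the Neo4j 5.x setting names server.logs.gc.enabled, server.logs.gc.options, server.logs.gc.rotation.keep_number, and server.logs.gc.rotation.size (verify these, and the option string, against the settings reference for your version and JVM):

```properties
# Enable GC logging; the options string is passed straight to the JVM
server.logs.gc.enabled=true
server.logs.gc.options=-Xlog:gc*,safepoint,age*=trace
# Keep at most 5 rotated history files, rotating at 20 MB
server.logs.gc.rotation.keep_number=5
server.logs.gc.rotation.size=20m
```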

Security log

Neo4j provides security event logging that records all security events. The security log is enabled automatically when the configuration dbms.security.auth_enabled is set to true.

For native user management, the following actions are recorded:

  • Login attempts — by default, both successful and unsuccessful logins are recorded.

  • All administration commands run against the system database.

  • Authorization failures from role-based access control.

If using LDAP as the authentication method, some cases of LDAP misconfiguration will also be logged, as well as the LDAP server communication events and failures.

If many programmatic interactions are expected, it is advised to disable the logging of successful logins by setting the dbms.security.log_successful_authentication parameter in the neo4j.conf file:

dbms.security.log_successful_authentication=false

The following information is available in the JSON format:

Table 8. JSON format log entries
Name Description


The timestamp of the log message.


The log level.


It is always security.


Connection details.


The database name the command is executed on.


The user connected to the security event. This field is deprecated in favor of executingUser.


The name of the user triggering the security event. Either same as authenticatedUser or an impersonated user.


The name of the user who authenticated and is connected to the security event.


The log message.


Included if there is a stacktrace associated with the log message.

An example of the security log in a plain format:

2019-12-09 13:45:00.796+0000 INFO  [johnsmith]: logged in
2019-12-09 13:47:53.443+0000 ERROR [johndoe]: failed to log in: invalid principal or credentials
2019-12-09 13:48:28.566+0000 INFO  [johnsmith]: CREATE USER janedoe SET PASSWORD '********' CHANGE REQUIRED
2019-12-09 13:48:32.753+0000 INFO  [johnsmith]: CREATE ROLE custom
2019-12-09 13:49:11.880+0000 INFO  [johnsmith]: GRANT ROLE custom TO janedoe
2019-12-09 13:49:34.979+0000 INFO  [johnsmith]: GRANT TRAVERSE ON GRAPH * NODES A, B (*) TO custom
2019-12-09 13:49:37.053+0000 INFO  [johnsmith]: DROP USER janedoe
2019-12-09 13:52:24.685+0000 INFO  [johnsmith:alice]: impersonating user alice logged in

Query log

Query logging is enabled by default and is controlled by the setting db.logs.query.enabled. It helps you analyze long-running queries and does not impact system performance. The default is to log all queries, but it is recommended to log only queries that exceed a certain threshold.

The following configuration settings are available for the query log:

Table 9. Query log enabled setting
Option Description


OFF

Completely disables query logging.


INFO

Logs at the end of queries that have either succeeded or failed. The db.logs.query.threshold parameter determines the threshold for logging a query: if the execution of a query takes longer than this threshold, the query is logged. Setting the threshold to 0s results in all queries being logged.


VERBOSE (default)

Logs all queries at both start and finish, regardless of db.logs.query.threshold.

The following configuration settings are available for the query log file:

Table 10. Query log configurations
The query log configuration Default value Description



Log query text and parameters without obfuscating passwords. This allows queries to be logged earlier before parsing starts.



Log executed queries.



This configuration option allows you to set a maximum parameter length to include in the log. Parameters exceeding this length will be truncated and appended with .... This applies to each parameter in the query.



If true, obfuscates all query literals before writing to the log. This is useful when Cypher queries expose sensitive information.

Node labels, relationship types, and map property keys are still shown. Changing the setting does not affect cached queries. Therefore, if you want the switch to take immediate effect, you must also clear the query cache with CALL db.clearQueryCaches().

This does not obfuscate literals in parameters. If parameter values are not required in the log, set db.logs.query.parameter_logging_enabled=false.



Log parameters for the executed queries being logged. You can disable this configuration setting if you do not want to display sensitive information.



This configuration option allows you to log the query plan for each query. The query plan shows up as a description table and is useful for debugging purposes. Every time a Cypher query is run, it generates and uses a plan for the execution of the code. The plan generated can be affected by changes in the database, such as adding a new index. As a result, it is not possible to historically see what plan was used for the original query execution.

Enabling this option has a performance impact on the database due to the cost of preparing and including the plan in the query log. It is not recommended for everyday use.



If the query execution takes longer than this threshold, the query is logged once completed (provided query logging is set to INFO). A threshold of 0 seconds logs all queries.



Track the start and end of a transaction within the query log. Log entries are written to the query log and include the transaction ID of a specific query, as well as the start and end of the transaction. You can also choose a level of logging (OFF, INFO, or VERBOSE). If INFO is selected, a transaction is logged only once its duration exceeds db.logs.query.transaction.threshold.



If the transaction is open for longer than this threshold (duration of time), the transaction is logged once completed, provided transaction logging is set to INFO. Defaults to 0 seconds, which means all transactions are logged. This can be useful when identifying where there is a significant time lapse after query execution and transaction commits, especially in performance analysis around locking.
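For example, to record transaction start and end events only for transactions open longer than two seconds, a neo4j.conf sketch could look like this (db.logs.query.transaction.enabled is assumed here as the name of the transaction logging setting, while db.logs.query.transaction.threshold is named above):

```properties
# Log transaction events at INFO, only for transactions open longer than 2 seconds
db.logs.query.transaction.enabled=INFO
db.logs.query.transaction.threshold=2s
```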

Example 1. Configure for simple query logging

In this example, the query logging is set to INFO, and all other query log parameters are at their defaults:

db.logs.query.enabled=INFO

The following is an example of the query log with this basic configuration:

2017-11-22 14:31 ... INFO  9 ms: bolt-session	bolt	johndoe	neo4j-javascript/1.4.1		client/	...
2017-11-22 14:31 ... INFO  0 ms: bolt-session	bolt	johndoe	neo4j-javascript/1.4.1		client/	...
2017-11-22 14:32 ... INFO  3 ms: server-session	http	/db/data/cypher	neo4j - CALL dbms.procedures() - {}
2017-11-22 14:32 ... INFO  1 ms: server-session	http	/db/data/cypher	neo4j - CALL dbms.showCurrentUs...
2017-11-22 14:32 ... INFO  0 ms: bolt-session	bolt	johndoe	neo4j-javascript/1.4.1		client/	...
2017-11-22 14:32 ... INFO  0 ms: bolt-session	bolt	johndoe	neo4j-javascript/1.4.1		client/	...
2017-11-22 14:32 ... INFO  2 ms: bolt-session	bolt	johndoe	neo4j-javascript/1.4.1		client/	...
Example 2. Configure for query logging with more details

In this example, the query log is enabled, as well as some additional logging:

db.logs.query.threshold=<appropriate value>

The following sample query is run on the Movies database:

MATCH (n:Person {name:'Tom Hanks'})-[:ACTED_IN]->(n1:Movie)<-[:DIRECTED]-(n2:Person {name:"Tom Hanks"}) RETURN n1.title

The corresponding query log in query.log is:

2017-11-23 12:44:56.973+0000 INFO  1550 ms: (planning: 20, cpu: 920, waiting: 10) - 13792 B - 15 page hits, 0 page faults - bolt-session	bolt	neo4j	neo4j-javascript/1.4.1		client/	server/>	neo4j - match (n:Person {name:'Tom Hanks'})-[:ACTED_IN]->(n1:Movie)<-[:DIRECTED]-(n2:Person {name:"Tom Hanks"}) return n1.title; - {} - {}

An obvious but essential point when examining the parameters of a particular query is to make sure you analyze only the entries relevant to that specific query, rather than reading the CPU, time, bytes, and other figures of each log entry in sequence.

The following is a breakdown of the resource usage parameters in the log entry for the query above:

2017-11-23 12:44:56.973+0000

Log timestamp.


INFO

Log category.

1550 ms

Total elapsed cumulative wall time spent in query execution. It is the total of planning time + CPU + waiting + any other processing time, e.g., taken to acquire execution threads. This figure is cumulative for every time a CPU thread works on executing the query.


planning: 20

Refers to the time the Cypher engine takes to create a query plan. Plans may be cached for repetitive queries, and therefore, planning times for such queries will be shorter than those for previously unplanned ones. In the example, this contributed 20ms to the total execution time of 1550ms.

CPU time

Refers to the time taken by the individual threads executing the query, e.g., a query is submitted at 08:00. It uses CPU for 720ms, but then the CPU swaps out to another query, so the first query is no longer using the CPU. Then, after 100ms, it gets/uses the CPU again for 200ms (more results to be loaded, requested by the client via the Driver), then the query completes at 08:01:30, so the total duration is 1550ms (includes some round-trip time for 2 round-trips), and CPU is 720+200=920ms.


waiting: 10

Time a query spent waiting before execution (in ms), for example, when a new query must wait for an existing query to release a lock. In the example, this contributed 10ms to the total execution time of 1550ms.
It is important to note that the client requests data from the server only when its record buffer is empty (one round-trip from the server may end up with several records), and the server stops pushing data into outgoing buffers if the client does not read them in a timely fashion. Therefore, it depends on the size of the result set. If it is relatively small and fits in a single round-trip, the client receives all the results at once, and the server finishes processing without any client-side effect. Meanwhile, if the result set is large, the client-side processing time will affect the overall time, as it is directly connected to when new data is requested from the server.

13792 B

The logged allocated bytes for the executed queries. This is the amount of HEAP memory used during the life of the query. The logged number is cumulative over the duration of the query, i.e., for memory-intense or long-running queries, the value may be larger than the current memory allocation.

15 page hits

Page hit means the result was returned from page cache as opposed to disk. In this case, the page cache was hit 15 times.

0 page faults

Page fault means that the query result data was not in the dbms.memory.pagecache and, therefore, had to be fetched from the file system. In this case, query results were served entirely by the 15 page cache hits mentioned above, so no disk reads were required.


The session type.


The client-to-database communication protocol used by the query.


The process ID.


The Driver version.


The query client outbound IP:port used.


The server listening IP:port used.


The username of the user who executed the query.

match (n:Person {name:'Tom Hanks'})-[:ACTED_IN]->(n1:Movie)<-[:DIRECTED]-(n2:Person {name:"Tom Hanks"}) return n1.title

The executed query.

The final two {} {} entries are the query parameters and txMetaData.

Attach metadata to a transaction

You can attach metadata to a transaction and have it printed in the query log using the built-in procedure tx.setMetaData.

Neo4j Drivers also support attaching metadata to a transaction. For more information, see the respective Driver’s manual.

Every graph app should follow a convention for passing metadata with the queries that it sends to Neo4j:

{
  app: "neo4j-browser_v4.4.0", (1)
  type: "system" (2)
}
1 app can be a user-agent styled-name plus version.
2 type can be one of:
  • system — a query automatically run by the app.

  • user-direct — a query the user directly submitted to/through the app.

  • user-action — a query resulting from an action the user performed.

  • user-transpiled — a query that has been derived from the user input.

This is typically done programmatically but can also be used with the Neo4j dev tools.
In general, you start a transaction on a user database and attach metadata to it by calling tx.setMetaData. You can also use the procedure CALL tx.getMetaData() to show the metadata of the current transaction. These examples use the MovieGraph dataset from the Neo4j Browser guide.

Example 3. Using cypher-shell, attach metadata to a transaction

Cypher Shell always adds metadata that follows the convention by default. In this example, the defaults are overridden.

neo4j@neo4j> :begin
neo4j@neo4j# CALL tx.setMetaData({app: 'neo4j-cypher-shell_v.4.4.0', type: 'user-direct', user: 'jsmith'});
0 rows
ready to start consuming query after 2 ms, results consumed after another 0 ms
neo4j@neo4j# CALL tx.getMetaData();
| metadata                                                                 |
| {app: "neo4j-cypher-shell_v.4.4.0", type: "user-direct", user: "jsmith"} |

1 row
ready to start consuming query after 37 ms, results consumed after another 2 ms
neo4j@neo4j# MATCH (n:Person) RETURN n  LIMIT 5;
| n                                                  |
| (:Person {name: "Keanu Reeves", born: 1964})       |
| (:Person {name: "Carrie-Anne Moss", born: 1967})   |
| (:Person {name: "Laurence Fishburne", born: 1961}) |
| (:Person {name: "Hugo Weaving", born: 1960})       |
| (:Person {name: "Lilly Wachowski", born: 1967})    |

5 rows
ready to start consuming query after 2 ms, results consumed after another 1 ms
neo4j@neo4j# :commit
Example result in the query.log file
2021-07-30 14:43:17.176+0000 INFO  id:225 - 2 ms: 136 B - bolt-session	bolt	neo4j-cypher-shell/v4.4.0		client/	server/>	neo4j - neo4j -
MATCH (n:Person) RETURN n  LIMIT 5; - {} - runtime=pipelined - {app: 'neo4j-cypher-shell_v.4.4.0', type: 'user-direct', user: 'jsmith'}
Example 4. Using Neo4j Browser, attach metadata to a transaction
CALL tx.setMetaData({app: 'neo4j-browser_v.4.4.0', type: 'user-direct', user: 'jsmith'})
Example result in the query.log file
2021-07-30 14:51:39.457+0000 INFO  Query started: id:328 - 0 ms: 0 B - bolt-session	bolt	neo4j-browser/v4.4.0		client/	server/>	neo4j - neo4j - MATCH (n:Person) RETURN n  LIMIT 5 - {} - runtime=null - {type: 'system', app: 'neo4j-browser_v4.4.0'}
Example 5. Using Neo4j Bloom, attach metadata to a transaction
CALL tx.setMetaData({app: 'neo4j-browser_v.1.7.0', type: 'user-direct', user: 'jsmith'})
Example result in the query.log file
2021-07-30 15:09:54.048+0000 INFO  id:95 - 1 ms: 72 B - bolt-session	bolt	neo4j-bloom/v1.7.0		client/	server/>	neo4j - neo4j - RETURN TRUE - {} - runtime=pipelined - {app: 'neo4j-bloom_v1.7.0', type: 'system'}

In Neo4j Browser and Bloom, the user-provided metadata is always replaced by the system metadata.

JSON format

The query log can use a JSON layout. To change the format, the layout of the QueryLog appender must be changed from the PatternLayout:

<RollingRandomAccessFile name="QueryLog" fileName="${config:server.directories.logs}/query.log"
        filePattern="${config:server.directories.logs}/query.log.%02i">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSSZ}{GMT+0} %-5p %m%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
    <DefaultRolloverStrategy fileIndex="min" max="7"/>
</RollingRandomAccessFile>

to using the JsonTemplateLayout:

<RollingRandomAccessFile name="QueryLog" fileName="${config:server.directories.logs}/query.log"
        filePattern="${config:server.directories.logs}/query.log.%02i">
    <JsonTemplateLayout eventTemplateUri="classpath:org/neo4j/logging/QueryLogJsonLayout.json"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
    <DefaultRolloverStrategy fileIndex="min" max="7"/>
</RollingRandomAccessFile>

The QueryLogJsonLayout.json template mimics the 4.x layout and contains the following information:

Table 11. JSON format log entries
Name Description


The timestamp of the log message.


The log level.


Valid options are query and transaction.


Included when there is a stacktrace associated with the log message.

If the type of the log entry is query, these additional fields are available:

Table 12. JSON format log entries
Name Description


Valid options are start, fail, and success.


The query ID. Included when db.logs.query.enabled is VERBOSE.


The elapsed time in milliseconds.


Milliseconds spent on planning.


Milliseconds spent actively executing on the CPU.


Milliseconds spent waiting on locks or other queries, as opposed to actively running this query.


Number of bytes allocated by the query.


Number of page hits.


Number of page faults.


Connection details.


The database name on which the query is run.


The name of the user executing the query. Either same as authenticatedUser or an impersonated user.


The name of the user who authenticated and is executing the query.


The query text.


The query parameters. Included when db.logs.query.parameter_logging_enabled is true.


The runtime used to run the query.


Metadata attached to the transaction.


Reason for failure. Included when applicable.


The transaction ID of the running query.


The query plan. Included when db.logs.query.plan_description_enabled is true.

If the type of the log entry is transaction, the following additional fields are available:

Table 13. JSON format log entries
Name Description


Valid options are start, rollback, and commit.


The database name on which the transaction is run.


The name of the user connected to the transaction. Either same as authenticatedUser or an impersonated user.


The name of the user who authenticated and is connected to the transaction.


ID of the transaction.