Shake Array Deployment: Clock Synchronization Guidelines#
Scope#
This section provides practical guidance for achieving the closest possible clock synchronization across a network of Raspberry Shake units used in an array deployment.
This section is limited to clock synchronization at data-acquisition startup. It does not describe beamforming theory, analysis methods, or post-processing.
1. Purpose#
In an array deployment, the usefulness of the recorded data depends on all units beginning acquisition with their clocks as closely aligned as possible.
Raspberry Shake units do not operate as phase-locked systems. Small timing differences between units are therefore unavoidable. The objective is to reduce those differences as much as possible by using a common time reference, allowing all units to stabilize, verifying time lock, and coordinating the start of data acquisition.
2. Use a single local NTP server#
All Shake units in the array should use the same NTP server, and that server should be located on the same LAN as the units.
Important
Configure every Shake in the array to use one shared NTP server
Locate that NTP server on the same local wired network as the Shakes
Do not mix public NTP servers or different upstream time sources across the array
This is important because using one shared local NTP source reduces timing differences caused by:
differing upstream NTP sources
differing network paths
differing network latency to remote servers
This provides the most consistent common time reference available in a standard Shake deployment.
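As an illustration, assuming the units run a standard ntpd client, each Shake's NTP configuration would list only the shared local server. The address below is hypothetical; substitute your own server, and note that on Raspberry Shake OS the NTP server is normally set through the unit's web front end rather than by editing files directly.

```
# /etc/ntp.conf (sketch) -- every Shake points at the one local server.
# 192.168.1.10 is a placeholder; use your local NTP server's address.
server 192.168.1.10 iburst prefer
```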
3. Allow all units to stabilize after boot#
The units do not need to be booted simultaneously.
Recommended procedure:
Boot all Shake units
Allow them to run undisturbed for 10 to 15 minutes
During this time, allow:
NTP synchronization to settle
system clocks to converge
hardware temperature to stabilize
4. Verify NTP lock before acquisition start#
Before starting acquisition, confirm from a centralized control system that all units are properly synchronized to the shared NTP server.
This verification step should check, for every Shake in the array:
that NTP is locked
that the reported offset is within an acceptable range
that no unit is reporting an abnormal timing condition
This can be done with the same centralized script used to coordinate startup.
Important
Use a centralized script that loops over all Shake IP addresses and reports the NTP status of every unit before acquisition begins.
Any unit not properly synchronized should be corrected before proceeding.
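One way to sketch such a centralized check is a small shell loop that queries each unit over SSH and parses the ntpq peer listing. The IP list, the myshake SSH user, and the offset threshold are assumptions to adapt to your deployment:

```shell
#!/bin/sh
# check_ntp.sh -- sketch of a centralized NTP status check for a Shake array.
# SHAKE_IPS, the SSH user, and MAX_OFFSET_MS are assumptions; adjust them.
SHAKE_IPS="${SHAKE_IPS:-}"          # e.g. "192.168.1.21 192.168.1.22"
MAX_OFFSET_MS="${MAX_OFFSET_MS:-10}"

# Print the offset (ms) of the selected peer (the "*" row) from
# `ntpq -pn` output; prints nothing when no peer is selected (no lock).
selected_offset() {
    awk '$1 ~ /^\*/ { print $9 }'
}

# Succeed when the absolute offset is below the threshold.
offset_ok() {
    awk -v o="$1" -v m="$2" 'BEGIN { a = (o < 0 ? -o : o); exit !(a < m) }'
}

for ip in $SHAKE_IPS; do
    out=$(ssh -o BatchMode=yes -o ConnectTimeout=5 "myshake@$ip" ntpq -pn 2>/dev/null) \
        || { echo "$ip: unreachable"; continue; }
    offset=$(printf '%s\n' "$out" | selected_offset)
    if [ -z "$offset" ]; then
        echo "$ip: NOT LOCKED"
    elif offset_ok "$offset" "$MAX_OFFSET_MS"; then
        echo "$ip: locked, offset $offset ms"
    else
        echo "$ip: offset $offset ms exceeds $MAX_OFFSET_MS ms"
    fi
done
```

Any unit reported as unreachable, not locked, or over the offset threshold should be investigated before acquisition is started.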
5. Start data acquisition in a coordinated manner#
Clock alignment depends primarily on when the data acquisition service begins, not when the unit itself was booted.
The data acquisition service is: rsh-data-producer
Warning
Synchronized system boot is not required.
Synchronized start of rsh-data-producer is what determines timing alignment.
Do not use:
systemctl restart rsh-data-producer
A restart introduces unnecessary per-node variance because the stop and start portions occur locally and at slightly different times on each unit.
Preferred procedure
Use a centralized control script to:
loop over all units and stop the service
distribute a short temporary start script to each unit
have that script start rsh-data-producer at a selected time in the near future
execute that script on all units
The temporary script may be copied to /tmp on each node and instructed to wait until a chosen wall-clock time before issuing:
systemctl start rsh-data-producer
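The preferred procedure above can be sketched as a single controller script that stops the service everywhere, writes the temporary start script, distributes it, and schedules the shared start. The IP list and the myshake SSH user are assumptions, and systemctl may require root privileges on your units:

```shell
#!/bin/sh
# coordinated_start.sh -- sketch of a centralized, scheduled service start.
# SHAKE_IPS and the SSH user are assumptions; adjust for your deployment.
SHAKE_IPS="${SHAKE_IPS:-}"        # e.g. "192.168.1.21 192.168.1.22"
TARGET=$(( $(date +%s) + 60 ))    # shared start time: ~60 s from now

# Temporary per-node script: sleep until the shared epoch time passed as
# $1, then start acquisition. Written locally, then copied to each node.
cat > /tmp/start_at.sh <<'EOF'
#!/bin/sh
delay=$(( $1 - $(date +%s) ))
[ "$delay" -gt 0 ] && sleep "$delay"
systemctl start rsh-data-producer
EOF

for ip in $SHAKE_IPS; do
    # Stop the service first (may require sudo on your units).
    ssh -o BatchMode=yes "myshake@$ip" "systemctl stop rsh-data-producer"
    scp -o BatchMode=yes /tmp/start_at.sh "myshake@$ip:/tmp/" || continue
    # Launch in the background so every node waits on its own local clock
    # for the same target second, rather than starting sequentially.
    ssh -o BatchMode=yes "myshake@$ip" \
        "nohup sh /tmp/start_at.sh $TARGET >/dev/null 2>&1 &" &
done
wait
```

Because each node sleeps until the same NTP-disciplined wall-clock second, the per-node start times do not accumulate the sequential delay that a simple SSH loop of restart commands would introduce.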
Note
This approach is preferred because, with all units synchronized to the same NTP source, scheduling the service start for the same near-future time allows all nodes to begin acquisition at very nearly the same moment.
This approach:
avoids sequential loop timing bias
reduces SSH and scheduling variability
aligns startup to the shared system clock across all nodes
Note
This procedure minimizes startup timing differences but does not eliminate them completely. Small residual differences may still exist due to normal operating-system and service-launch latency.
6. Built-in timing behavior (informational)#
Raspberry Shake units include built-in timing correction behavior as part of normal operation.
Internally, timing is checked against NTP at regular intervals (approximately every 50 seconds), and a discrete data-point adjustment may occur when needed.
This behavior is automatic and requires no user action.
The practical implications are as follows:
If all units begin acquisition at approximately the same time, their timing-correction cycles will also begin at approximately the same point.
Small timestamp adjustments may occur during operation as part of normal clock correction.
7. Network and deployment recommendations#
To reduce timing variability further:
use wired Ethernet only
avoid WiFi entirely
keep all units and the NTP server on the same low-latency local network
minimize intermediate network devices where practical
ensure all units run the same software and configuration
8. Advanced timing solution#
NTP provides the best practical timing method for a standard networked Shake array, but it does not provide true phase-locked synchronization between nodes.
Sub-millisecond synchronization across all nodes cannot be achieved with NTP alone.
Achieving higher timing accuracy requires a GPS/PPS timing source on each Shake node.
9. Summary#
For the closest possible synchronization across a network of Shakes:
use a single local NTP server
allow all units to stabilize after boot
verify NTP lock and offset on every node
coordinate startup of rsh-data-producer using a scheduled start time
use a consistent wired network and configuration
When these steps are followed, all units will begin time-stamping their data as closely aligned as is practically achievable in a standard NTP-based Shake array deployment.