This FAQ covers common questions about our data platform, mapping tools, monitoring systems, usage expectations, billing model, and operational reliability. It is designed for both new users and organizations evaluating Tocsin Data for operational, research, or analytical workflows.
Find answers to common questions quickly and easily
Tocsin Data builds and operates systems that monitor, analyze, and report on real-world and digital incidents. Our dashboards connect network security, infrastructure health, and open-source intelligence into a single reporting layer.
Our clients range from small organizations that need clear uptime and firewall reports to enterprises and researchers who rely on aggregated OSINT feeds and compliance-ready reporting.
No. The interface was designed for clarity—technical staff can drill into details, but non-technical users can read the same information in plain-language summaries and PDF reports.
All client data is hosted on encrypted infrastructure within North America. Access is restricted by role, monitored continuously, and every data exchange uses TLS encryption.
We combine open-source intelligence (OSINT), internal monitoring, and client-supplied infrastructure data. All collection follows legal and ethical guidelines, focusing on publicly available or client-owned information.
Firewall Reports summarize network activity and threat data collected from your connected domains and Cloudflare zones. They highlight blocked or challenged traffic, top offending IPs, ASN sources, and geographic patterns over a defined time range.
They are automatically created by scheduled jobs (cron) that pull recent firewall event data, process it, and store it in structured tables. Once processed, each report is compiled into both a dashboard view and a downloadable PDF document for recordkeeping.
Firewall Reports usually run daily, but may generate more frequently if high activity or new data is detected. Each run only covers data since the previous report period, ensuring no overlap or duplication.
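The incremental window logic amounts to something like the sketch below, a minimal illustration assuming a hypothetical fetchFirewallEvents(zone, since, until) helper; the real collectors run server-side as cron jobs.

  // Hypothetical helper and field names, for illustration only.
  async function buildReportWindow(zone, lastReportEnd, fetchFirewallEvents) {
    const since = lastReportEnd;   // start exactly where the last report stopped
    const until = new Date();      // ...so periods never overlap or duplicate
    const events = await fetchFirewallEvents(zone, since, until);
    // Aggregate one of the metrics the reports surface: top IPs by volume.
    const byIp = new Map();
    for (const e of events) byIp.set(e.clientIp, (byIp.get(e.clientIp) ?? 0) + 1);
    const topIps = [...byIp.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
    return { zone, since, until, total: events.length, topIps };
  }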
Each report may contain the following (an illustrative record sketch follows this list):
Total requests blocked or challenged
Top IP addresses and ASNs by volume
Most common countries of origin
Request types or paths targeted
Trend comparisons with the previous period
Optional metadata such as domain, date range, and system summary
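Concretely, a single report record with those fields might look like this sketch (field names are illustrative, not the exact schema):

  {
    "domain": "example.com",
    "range": { "since": "2025-11-01", "until": "2025-11-02" },
    "totals": { "blocked": 4210, "challenged": 315 },
    "topIps": [{ "ip": "203.0.113.7", "asn": "AS64500", "count": 912 }],
    "topCountries": ["US", "DE", "BR"],
    "trend": { "blockedChangePct": 12.5 }
  }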
Yes. Administrators can configure which domains to include, how often reports generate, and what metrics appear. Future updates will add finer controls like per-country filtering, specific rule insights, and export options to external analytics tools.
The IPTV module catalogs and monitors freely available IPTV streams—mainly news, business, and public-interest channels—so they can appear directly within dashboards or media widgets when enabled.
Tocsin Data maintains an evolving list of verified public IPTV URLs collected from open repositories and user submissions. Each source is reviewed for accessibility and categorized by region or topic before inclusion.
Integrating IPTV feeds allows dashboards to display live broadcast context alongside news and data events. For example, users tracking geopolitical or financial incidents can view related live news coverage in one place.
Each IPTV source is automatically checked on a schedule to confirm that its stream is live and accessible. If a source goes offline, the system flags it for review and posts a notice to the related Discord channel.
Users can also manually flag a stream as up or down through the dashboard interface. These user reports feed into the same Discord thread, helping confirm issues or identify regional outages faster.
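A health check of this kind reduces to a reachability probe plus a webhook notice. The sketch below is a minimal illustration, assuming a hypothetical source object and a standard Discord webhook URL; the production scheduler and flagging logic live server-side.

  // Probe a stream URL and post a Discord notice if it appears down.
  async function checkStream(source, webhookUrl) {
    let ok = false;
    try {
      const res = await fetch(source.url, { method: "HEAD" });
      ok = res.ok;
    } catch {
      // A network error means the stream is unreachable.
    }
    if (!ok) {
      await fetch(webhookUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ content: `IPTV source down: ${source.name} (${source.url})` }),
      });
    }
    return ok;
  }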
Not directly. IPTV sources are gathered automatically from public repositories and verified GitHub lists, then filtered and categorized by Tocsin Data.
Users can report whether a listed stream is up or down through the dashboard, but cannot submit new source URLs. This keeps the master list clean and prevents unverified or unstable feeds from being added.
The Public Data system provides a curated collection of open-source datasets, directories, and reference links relevant to threat intelligence, cybersecurity, and public information domains. Each entry includes verified descriptions, source links, and update details.
Entries are continuously reviewed and refreshed as new sources become available or existing ones change. Update frequency varies by category—critical feeds and active intelligence lists are checked daily, while static reference data is audited monthly.
Yes. Verified users may suggest new datasets or links through the contact form or integrated dashboard feedback option. Each submission is manually reviewed before publication to maintain accuracy and security integrity.
Most listings link to openly available data. However, some may reference external APIs or paid datasets for completeness.
Each source undergoes an internal validation process to confirm reliability, persistence, and license compliance. Manual reviews ensure metadata and descriptions remain current.
Public Data entries may include external links maintained by third-party organizations or agencies. Tocsin Data provides these listings for informational purposes only. Each item includes a disclaimer clarifying the nature of the connection, source ownership, and any applicable terms or attribution requirements.
A Geographic Information System (GIS) connects data with geography. It allows information such as incidents, assets, or reports to be displayed on interactive maps where location, time, and context all come together.
GIS is at the core of how Tocsin Data organizes and visualizes information. It helps transform large datasets—such as public reports, threat indicators, or infrastructure data—into clear, map-based insights.
Maps may include locations of incidents, network assets, critical infrastructure, and public data sources. Each dataset is processed and updated automatically to ensure accurate, near real-time geographic context.
Most map layers update continuously or on scheduled intervals depending on their source. Some layers, like live feeds or public alerts, refresh as new data is received.
Yes. Many GIS layers and datasets can be accessed through Tocsin Data’s API in formats such as JSON, GeoJSON, KML, or CSV, allowing seamless integration into dashboards, external systems, or analytic workflows.
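Because GeoJSON responses are standard FeatureCollections, any GeoJSON-aware tool can consume a layer directly. A minimal sketch, assuming a hypothetical /gis/layers/incidents endpoint (check your API docs for the real layer paths):

  const res = await fetch("https://api.tocsindata.com/gis/layers/incidents?format=geojson");
  const layer = await res.json();
  // Each feature pairs a geometry with its attribute properties.
  for (const f of layer.features.slice(0, 5)) {
    console.log(f.geometry.type, f.properties);
  }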
In many cases. Our system supports data exports, secure webhooks, and API access for select clients, allowing integration with existing monitoring, ticketing, or analysis platforms.
Yes. Tocsin Data offers a flexible API that allows third-party software and custom systems to retrieve data directly. The API supports multiple output formats for easy integration with dashboards, maps, or reporting tools.
Data can be retrieved in JSON, GeoJSON, KML, CSV, and RSS formats. This ensures compatibility with a wide range of platforms — from mapping and GIS tools to automation scripts and content aggregators.
API feeds update automatically based on their source type. Real-time or near-real-time datasets (like incident logs or media feeds) refresh as new data arrives, while scheduled sources (such as reports) update following their cron cycles.
Start with the API docs packaged with your Tocsin Data module (see the Documentation Index). If your deployment doesn’t include the API docs, contact us and we’ll enable access.
Quick start
Pick a format: json, geojson, kml, csv, or rss.
Call the endpoint: standard HTTPS GET with optional filters (e.g., time range, domain).
Authenticate (if required): use your API key in the header or query string as documented.
Examples
Endpoint pattern:
https://api.tocsindata.com/.
With API key (header):
Authorization: Bearer <your-api-key>
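The snippets below are illustrative sketches rather than canonical examples: the /feeds/incidents path and the filter names are assumptions, so substitute the actual endpoints and parameters from the API docs bundled with your module.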
cURL:
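  # Hypothetical endpoint and filters; substitute values from your API docs.
  curl -H "Authorization: Bearer $TOCSIN_API_KEY" \
    "https://api.tocsindata.com/feeds/incidents?format=geojson&days=7&limit=100"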
JavaScript:
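  // Hypothetical endpoint; assumes Node 18+ (global fetch) with the key in an env var.
  const res = await fetch(
    "https://api.tocsindata.com/feeds/incidents?format=json&days=7",
    { headers: { Authorization: `Bearer ${process.env.TOCSIN_API_KEY}` } }
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const items = await res.json();
  console.log(`Fetched ${items.length} items`);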
Tips
Use since, until, days, or limit filters (where supported) to keep responses small. Other filters are available in the documentation.
Respect rate limits shown in your docs; back off on HTTP 429 (see the retry sketch after these tips).
Validate schema versions—feeds may add fields over time.
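A minimal retry loop for the 429 case might look like this sketch, reusing the hypothetical fetch-based call above:

  // Back off exponentially on HTTP 429, honoring Retry-After when present.
  async function fetchWithBackoff(url, opts = {}, maxRetries = 5) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      const res = await fetch(url, opts);
      if (res.status !== 429) return res;
      const retryAfter = Number(res.headers.get("retry-after"));
      const waitMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
    throw new Error("Rate limited: retries exhausted");
  }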
If you don’t see the API docs in your environment, or need a new endpoint/format, reach out and we’ll provision it for you.
DarkSpider is Tocsin Data’s aggregation and enrichment engine. It primarily ingests RSS feeds from reputable media sources and official open-data sources, such as GeoJSON feeds and U.S. government datasets, normalizes them, and surfaces signals relevant to your monitored domains or topics.
Two big expansions:
Continuously, based on feed publish times. Items are pulled on a scheduled cadence; freshness depends on the upstream source. Enrichment and scoring run immediately after ingestion so dashboards reflect new items within minutes of their appearance.
As of November 6, 2025, DarkSpider actively tracks 17,080 news and media sites worldwide. The list continues to expand as new, verified RSS and open-data sources are added. Feeds are regularly reviewed for reliability, duplication, and regional diversity to maintain balanced coverage.
It parses, normalizes, deduplicates, and scores items, then attaches context (entities, locations, domains, severity tags). Results appear in dashboards and can feed scheduled reports or alerts based on your filters and thresholds. Geocoding is also available for event items that pass filters.
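The dedupe-and-score step reduces to something like the sketch below; the field names and keyword matching are illustrative stand-ins, not DarkSpider’s actual scoring model.

  // Deduplicate by a normalized title+link key, then attach a naive severity tag.
  function dedupeAndScore(items, watchTerms) {
    const seen = new Set();
    const out = [];
    for (const item of items) {
      const key = `${item.title.trim().toLowerCase()}|${item.link}`;
      if (seen.has(key)) continue; // drop duplicates seen across feeds
      seen.add(key);
      const text = `${item.title} ${item.summary ?? ""}`.toLowerCase();
      const matches = watchTerms.filter((t) => text.includes(t.toLowerCase()));
      const severity = matches.length >= 2 ? "high" : matches.length === 1 ? "medium" : "low";
      out.push({ ...item, matches, severity });
    }
    return out;
  }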
Dashboards refresh in near-real time. Most network, OSINT, and performance data are pulled and displayed within minutes of detection, keeping the visual indicators current without manual refresh.
Yes. Each dashboard and report can be configured to show specific domains, data feeds, or monitoring modules. Administrators can adjust filters, time ranges, and visual layouts directly in the interface.
Tocsin Data produces several kinds of reports, each with its own update cycle.
Automated reports — such as firewall, uptime, and domain summaries — are generated on scheduled intervals by our internal cron systems. Their frequency depends on the type of data and when the previous report was last updated.
Security and business reports — which combine automated data with human or AI-assisted analysis — are created as needed or on client-defined schedules. These often include broader insights, correlations, and narrative explanations that go beyond automated output.
For most clients, yes. Once initial setup and permissions are complete, automated collectors and cron jobs handle the ongoing data ingestion and reporting. Manual review is only needed for specialized or AI-assisted reports.