Enigma Technologies

Using machine learning and artificial intelligence, the company organizes and connects hundreds of sources to provide data about businesses for customers in a variety of use cases, from financial services compliance to B2B marketing and insurance underwriting and lending.[2][3]

They were frustrated[9] that, although relevant public data was available and should have shed light on these two crises, the world still struggled to connect the dots to understand how and why they happened, or to stop them altogether.

In October 2013, the company was a finalist in the NYCEDC-sponsored "Take the HELM: Hire + Expand in Lower Manhattan" contest,[15][16] and in June 2014 they were selected as participants in the FinTech Innovation Lab program.

In September 2018, Enigma announced $95 million in new funds to expand its network and platform that connects real-world and enterprise data to power key workflows.[4]

Data republished by Enigma was free for journalists to reuse (with attribution),[26][3] so it was occasionally used as a primary or corroborating source for analyses on everything from FBI aerial surveillance[27] to house fire incidents[28] to U.S. government shipping records.

This redesign included features that enabled users to identify connections and interpretations within the data quickly and easily by focusing on metadata and linked datasets.[30]

Enigma combines hundreds of public and private sources of data, including government agencies, organizations, and websites, into a single database.[36][37][38][39]

Tools are provided in the interface for performing basic statistical analysis, such as finding the minimum, maximum or mean value of any numerical data column.
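The column statistics described above can be sketched as follows; this is a minimal illustrative example with invented values, not Enigma's actual interface or code:

```python
# Minimal sketch of basic column statistics (min, max, mean) for a
# numerical data column. Values here are invented for illustration.

def column_stats(values):
    """Return min, max, and mean of a numerical column, skipping nulls."""
    nums = [v for v in values if v is not None]
    return {
        "min": min(nums),
        "max": max(nums),
        "mean": sum(nums) / len(nums),
    }

stats = column_stats([3.0, None, 7.0, 5.0])
print(stats)  # {'min': 3.0, 'max': 7.0, 'mean': 5.0}
```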

On May 11, 2016, Enigma announced the launch of ParseKit, now called "Concourse", their proprietary software for ETL and data integration, which had been developed internally via dogfooding to acquire their public datasets.[42][43]
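The general extract-transform-load (ETL) pattern such software implements can be sketched as below. The function names, fields, and data are hypothetical; ParseKit/Concourse is proprietary and its actual API is not described here:

```python
# Illustrative ETL pipeline sketch: extract raw rows, transform them
# into clean records, and load them into a destination store.
# All names and data are invented examples.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into field lists."""
    return [row.split(",") for row in raw_rows]

def transform(records):
    """Transform: normalize names and cast amounts to float."""
    return [
        {"name": name.strip().upper(), "amount": float(amount)}
        for name, amount in records
    ]

def load(records, store):
    """Load: append cleaned records to a destination store."""
    store.extend(records)
    return store

store = []
raw = ["acme corp, 120.5", "globex , 88"]
load(transform(extract(raw)), store)
print(store[0]["name"])  # ACME CORP
```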

In September 2018, Enigma announced its use of knowledge graphs as the vehicle for ingesting, standardizing, and adapting data from tables into representations of relationships delivered to users.
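One common way to adapt tabular data into relationship representations, as a knowledge graph approach implies, is to decompose each row into (subject, predicate, object) triples. The sketch below illustrates that general technique with invented fields and entities; it is not Enigma's actual schema or pipeline:

```python
# Minimal sketch of adapting flat table rows into knowledge-graph
# triples (subject, predicate, object). Fields and values are
# hypothetical examples, not Enigma data.

def rows_to_triples(rows, id_field):
    """Turn each non-identifier column of a row into a relationship triple."""
    triples = []
    for row in rows:
        subject = row[id_field]
        for field, value in row.items():
            if field != id_field:
                triples.append((subject, field, value))
    return triples

rows = [{"business_id": "B1", "registered_in": "NY", "licensed_by": "FDA"}]
for triple in rows_to_triples(rows, "business_id"):
    print(triple)
# ('B1', 'registered_in', 'NY')
# ('B1', 'licensed_by', 'FDA')
```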