[8] Early data lakes, such as Hadoop 1.0, had limited capabilities because they only supported batch-oriented processing (MapReduce).
[10] PwC was also careful to note in their research that not all data lake initiatives are successful.
They quote Sean Martin, CTO of Cambridge Semantics: "We see customers creating big data graveyards, dumping everything into Hadoop distributed file system (HDFS) and hoping to do something with it down the road."
The main challenge is not creating a data lake, but taking advantage of the opportunities it presents.
In response to various critiques, McKinsey noted[13] that the data lake should be viewed as a service model for delivering business value within the enterprise, not a technology outcome.