Well, Markram did have the problem of not getting the funding he was after.
But, also, it's probably a much harder problem than he realizes. Getting lots of little things right in each specific region is no guarantee that the global constraints also come out right. He is obviously well aware of this -- but there is sure to be a great deal of basic science left to do just to pin down what those global constraints are, and those are the things you need to get exactly right.
A group at Google working on something called the Neuromancer Project has been applying machine learning to help nail down both the micro-scale and global constraints in order to simulate a brain: https://arxiv.org/abs/1710.05183
I personally think simply imitating the brain -- applying deep learning to meso-scale (1 mm^3 voxel) recordings at fairly high temporal resolution (around 100 milliseconds), together with behavioral and stimulus data -- is a much more productive route to achieving a "digital brain". Gwern wrote a nice explainer (shorter and more to-the-point than ones I've written): https://www.reddit.c...ation_learning/
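For concreteness, here's a rough sketch of the kind of setup I have in mind. Everything here -- the names, the dimensions, the GRU backbone -- is a hypothetical placeholder of mine, not anyone's actual pipeline:

```python
import torch
import torch.nn as nn

class BrainImitator(nn.Module):
    """Sequence model that imitates meso-scale brain dynamics.

    Given the current voxel activity and the external stimulus, it
    predicts the next step's voxel activity and the behavioral output.
    All dimensions are illustrative placeholders.
    """

    def __init__(self, n_voxels=50_000, stim_dim=256, behav_dim=64, hidden=2048):
        super().__init__()
        self.rnn = nn.GRU(n_voxels + stim_dim, hidden, batch_first=True)
        self.next_voxels = nn.Linear(hidden, n_voxels)  # activity at t+1
        self.behavior = nn.Linear(hidden, behav_dim)    # behavior at t

    def forward(self, voxels, stimulus):
        # voxels:   (batch, time, n_voxels)  -- activity in ~100 ms bins
        # stimulus: (batch, time, stim_dim)  -- encoded sensory input
        h, _ = self.rnn(torch.cat([voxels, stimulus], dim=-1))
        return self.next_voxels(h), self.behavior(h)
```

Training is then plain supervised imitation: minimize the error between predicted and recorded next-step activity, plus a behavioral loss. No circuit-level understanding required.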
I have not budged in my thinking that this will be an important component in the first functioning AGI system.
With enough data, you don't need to "understand" what the brain is doing (as much), or know as much of the basic science -- just as machine learning practitioners can build machine translation systems without knowing much about linguistics or the languages they are translating between (they don't even need to know the basic grammar). Nor do you need to be able to scan the brain in microscopic detail: machine learning will fill in the gaps in functionality from whatever you are able to scan.
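To make that last point concrete: if you can only record a fraction of the voxels, you train only on the channels you actually have, and the model's latent state has to pick up the functional slack. A hypothetical training step, continuing the sketch above:

```python
def training_step(model, voxels, stimulus, behavior, observed_mask, opt):
    """One supervised step where only some voxels were actually scanned.

    observed_mask: (n_voxels,) boolean -- True where we have real recordings.
    The loss is computed only on observed channels; the hidden state must
    stand in for everything unobserved.
    """
    pred_voxels, pred_behavior = model(voxels[:, :-1], stimulus[:, :-1])
    voxel_loss = nn.functional.mse_loss(
        pred_voxels[..., observed_mask], voxels[:, 1:][..., observed_mask]
    )
    behavior_loss = nn.functional.mse_loss(pred_behavior, behavior[:, :-1])
    loss = voxel_loss + behavior_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```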
Furthermore, there are several teams working on this "imitation" approach at the moment, with comparatively small funding (only a few million dollars, not the billion+ sought by Markram).