I think most of the article is as bad as the part you are criticizing. The part you refer to is just particularly bad in its presentation. Making unsupported claims while declaring oneself correct and dismissing anyone who disagrees as not sane is neither helpful nor productive.
There is precedent that these types of splits don't make things as safe as advocates believe. Amusingly, the precedent is part of the foundational bits of modern theoretical CS :)
The short version is to look at the early twentieth-century work on formal systems, languages and metalanguages, etc., and then notice how those particular sandcastles got kicked over by the incompleteness theorems.
By the way, personally I tend to look at type systems in terms of the tradeoffs they make. It certainly seems to me like the diminishing-returns point -- where writing verifiable code within a "more expressive" type system becomes more work than writing unverifiable code in a "less expressive" system and dealing with any bugs after the fact -- has already been passed by some of the popular FP languages.
I think you misunderstand me. I'm not talking about computation languages. I'm talking about type languages (as was the OP). As in, the grammatical construction in which types are expressed. For example, OCaml has a separate type language (the module system) from its computation language. Languages like Python and Elixir do not (you can define classes as part of a computation).
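To make the Python side of that contrast concrete: a class definition is just an ordinary statement executed at runtime, so a "type" can be constructed inside any computation. A minimal sketch (the function and names here are illustrative, not from any library):

```python
# Python: defining a class is itself computation. Here a class is
# built dynamically via the built-in type() constructor, with its
# set of fields depending on a runtime value.
def make_point_class(dims):
    # Pick field names based on the runtime argument, e.g. dims=3 -> x, y, z.
    attrs = {name: 0 for name in "xyzw"[:dims]}
    return type("Point", (object,), attrs)

Point3 = make_point_class(3)
p = Point3()
# p has fields x, y, z because of a computation, not a declaration
# the compiler could have inspected ahead of time.
```

There is no syntactic boundary separating this from value-level code, which is exactly the blurring at issue.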
The OP is advocating that the type language be one and the same as the language in which computation is performed. I'm suggesting that this leads to usability issues, as it's not clear from the syntax which type expressions are and are not usable by the compiler to check/optimize/etc. the value-level computation.
This is entirely orthogonal to the point you seem to be making about expressive vs. inexpressive type systems.