My answer is largely yes. The whole idea of digital computation began with a system of digital communication that recognized only two symbols, zero and one. So why did we have to invent assembly, C, Java, ML, and dozens of other languages, instead of sticking to the fundamental computing principle that a simple two-symbol language could express any problem solvable by a computer? I think the reason for inventing all these languages lies in the main reason computing was pursued as a promising idea in the first place. As we might all agree, the main purpose of digital computing is to exploit the power of an electrical system to quickly solve the world's practical problems: business, military, health, education, and personal computing needs. This is not simply cracking the Enigma machine, for which Turing designed a purpose-built system. Even for that machine, imagine what could have been done if we had at least had the 8086 processor, let alone all the fancy computing done at my favorite research university, Virginia Tech, of course.

Moving forward from that incredible application of computing, and from the power of that simple two-symbol digital language, if we really wanted to solve the world's practical problems (as we claim to be doing today), we had to come up with a better way to communicate with the digital power at hand. I'm not downplaying the role of electrical engineers in designing more powerful, smarter, and more advanced computational infrastructure, which they have achieved nicely, but I'm emphasizing the fact that even that architectural progress is indebted to progress in language design and to the natural demand for power-hungry software written in highly expressive languages. Taking the argument one step further: among all the wonderful areas of research in computer science, one might say, hey, computational complexity, graph theory, the design of the magnificent IP stack, all the advances in systems, parallelism and concurrency, and finally the striking rise of computer security, these all make up what we call computer science today. I could not agree more. However, look at what they all have in common: aren't they all talking about languages, being affected by languages, calling for new languages, begging for changes in existing languages, or at least in need of powerful libraries? Surely they are!

As a systems security expert, I see the fundamental problem of computer security in the lack of powerful languages that, at the very least, do not themselves create unwanted vulnerabilities, backdoors, and memory leaks. All we want is a language that does not simply surrender my software to a crafted packet or a malicious URL, eventually turning it into a walking zombie that takes commands from the revolutionary council of malware. That is simple to say and, of course, hard to achieve. But I see light at the end of the tunnel. One promising recent attempt is Mozilla's Rust programming language. Rust has built-in memory safety features that prevent many problems with dangling pointers, double frees, and arbitrary memory manipulation. And guess what: yes, you still have pointers, and you can work with several forms of safe pointers. The idea is not simply to restrict access to pointers, as Java does, but to manage pointers in a safe way. This is achieved through a system of ownership, safe libraries that provide careful access to the heap, and a number of safe concurrency features built on that ownership system. One downside of the current version of Rust is the occasional need for a feature called "unsafe blocks," inside which many of the language's safety guarantees no longer hold. I'm sure there will be valuable attempts to address this part of Rust, and our own research will also focus on this problem.
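To make the ownership idea concrete, here is a minimal sketch in Rust (the `measure` helper is my own illustration, not part of any library) showing how moves and borrows rule out dangling pointers and double frees at compile time, and how an `unsafe` block opts back into raw-pointer access:

```rust
// A shared borrow: read-only access while the owner stays valid.
fn measure(s: &String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    let t = s; // ownership of the heap buffer moves from `s` to `t`

    // println!("{}", s); // compile error: `s` was moved, so no
    //                    // use-after-move and no double free is possible.

    // Borrowing lets `measure` read the string without taking ownership.
    let len = measure(&t);
    println!("{} has length {}", t, len);

    // Raw pointers exist too, but dereferencing one requires `unsafe`,
    // which marks exactly where the compiler's checks are suspended.
    let p: *const String = &t;
    unsafe {
        println!("via raw pointer: {}", (*p).len());
    }
} // `t` is dropped here; the buffer is freed exactly once.
```

The key design choice is that every heap allocation has exactly one owner at a time, so the compiler knows statically where to free it, with no garbage collector needed.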