October is National Cyber Security Awareness Month (NCSAM). I've been doing security and vulnerability research since 1994, and a lot has changed in the industry. For this post, in honor of NCSAM, I'm going to revisit my first CVE (Common Vulnerabilities and Exposures), and offer some general observations and stories from the past.
Creation of CVE
In the late 90s, hackers who discovered vulnerabilities would sometimes send an email to Bugtraq with details. Bugtraq was a notification system used by people with an interest in network security. It was also a place that might have been monitored by employees of software companies looking for reports of vulnerabilities pertaining to their software. The problem was that there was no easy way to track specific vulnerabilities in specific products.
It's worse if you have multiple similar vulnerabilities impacting the same version of the software being tested. How do you distinguish between them? What the industry needed was a centralized way to track these flaws. As fate would have it, the folks over at MITRE wrote up a white paper outlining the requirements and creation of a vulnerability database, and thus, CVE was born.
At the time, at least in my mind, CVEs and their assignments were ordained by mysterious entities at a government facility. They were a government shadow organization. I knew nothing about MITRE then. I remember being somewhat intimidated by them, since getting a CVE assigned meant your research was audited and reviewed with a fine-toothed comb. It was a mark of approval by the folks who cared for and tracked vulnerabilities.
My first CVE
It was May 1999. I was working as a system administrator for Bath Iron Works (BIW) under contract with Computer Sciences Corporation. Specifically, I was a UNIX Systems Administrator, level one. Our team managed over 3,000 UNIX systems across BIW's campuses, most of them CAD systems used for designing AEGIS class destroyers. This position gave me access to various flavors of UNIX, ranging from Sun Solaris to IBM AIX.
Among them was an SGI lab consisting of around twenty-five purple SGI Indigo 2 desktops. These systems were running SGI's standard operating system, IRIX version 6.4.5, which was notoriously riddled with security vulnerabilities.
During the late 1990s, the security trend was attacking servers and abusing setuid binaries to escalate privileges. This was typically done via a buffer overflow or some other means of modifying critical system files like /etc/passwd, and then adding an account.
We had a server room where an SGI Onyx/2 resided, where only a select few individuals were allowed access. Those select few routinely chided the rest of us about their access to the $500,000 system.
Fig 1: An SGI Onyx/2 Credit: JC Penalva
One of my tasks as a newly hired system administrator was to assess the security of our systems by performing penetration tests. I landed this role on my first day by demonstrating to my new manager how easy it was to log in to SGI machines using the LP (line printer) account, after he promised to give me an account once I proved myself a competent administrator.
I really wanted to get root access on the Onyx/2, so I decided to examine one of the SGIs I already had access to and see if there was a way to get from LP to root. These systems shipped with an LP user account with no password by default, so getting a login shell wasn't a problem.
I knew I should examine all setuid root binaries and see what types of operations they enabled. Would they allow me to read and write files on the system as root? I noticed one setuid binary called Midikeys that looked very interesting.
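This hunt for setuid root binaries is easy to automate. Below is a minimal modern sketch in Python; it's a reconstruction for illustration, not the actual commands I used in 1999 (from a shell, the classic `find / -perm -4000` idiom does the same job):

```python
import os
import stat

def find_setuid_binaries(root, owner_uid=None):
    """Walk a directory tree and collect regular files with the setuid
    bit set, optionally filtered by owner (uid 0 means root)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: don't follow symlinks
            except OSError:
                continue  # unreadable entry, skip it
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                if owner_uid is None or st.st_uid == owner_uid:
                    hits.append(path)
    return hits

# e.g. find_setuid_binaries("/usr/sbin", owner_uid=0) would have flagged
# /usr/sbin/midikeys on those IRIX boxes.
```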
SGI IRIX Midikeys, as seen in Figure 2, is a simple on-screen MIDI keyboard you can play a little tune on.
Fig. 2 - The SGI IRIX Midikeys interface
What I noticed is that Midikeys allowed users to open and save files, so you had a place to store your MIDI files. More importantly, it allowed me to open /etc/passwd and edit it, and it even let you set your editor to /bin/sh, which would pop out a root prompt. This was how I was able to get root on the Onyx/2.
I quickly logged into the Onyx/2 with the LP account and confirmed that it had a setuid root /usr/sbin/midikeys binary. It did, so I edited /etc/passwd and added an account for myself with uid 0 and gid 0 and no password. With that done, I set a password with the passwd command, and then realized I was changing the password for the root account, since my uid was 0.
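For readers who never hand-edited a passwd file: a 1990s-era /etc/passwd entry has seven colon-separated fields, and an empty password field on systems of that era meant no password at all. A uid-0 entry like the one I added looked roughly like this (the account name below is hypothetical):

```python
# name:password:uid:gid:gecos:home:shell -- the classic 7-field format.
# An empty password field meant "no password required" on IRIX of that
# era, and uid 0 makes the account equivalent to root.
entry = "backdoor::0:0:me:/:/bin/sh"  # hypothetical account name

name, password, uid, gid, gecos, home, shell = entry.split(":")
print(uid, gid, shell)  # -> 0 0 /bin/sh
```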
So, I hit CTRL-D to back out of the command, but IRIX changed the root password to CTRL-D. All of this occurred during a demo for a Navy admiral on the AEGIS destroyer class ships that Bath Iron Works builds.
They were unable to log in to the Onyx/2 to run the demo. Luckily, the system administrator had a root shell open on his desktop elsewhere and was able to reset the password. Said administrator was understandably upset with me, but more to the point, that Onyx/2 contained classified CAD drawings of US naval ships, and it had just been proven insecure.
Software Vulnerability Paradigm Shift
Over the years, the attack surface has shifted. Back then, most researchers and hackers would want to get a root shell on a server in order to wipe access logs and hide their tracks. Sometimes, they'd use their access to turn the compromised server into a storage depot for warez, or pirated materials.
These days, automated attacks have largely replaced manual ones, and they focus mostly on getting a web shell or exploiting vulnerabilities such as Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), or Remote Code Execution (RCE) in a web application. The result of these modern attacks is compromised assets being added to a botnet or used for cryptocurrency mining.
What I'm saying is, the servers are no longer the bastion hosts; the client-side systems now need to be. In the past, you wanted to avoid involving the user when attacking a system. There was a risk that the system administrator would be logged in while you were going about your tasks. They might notice, and then it would be game over.
Currently, client-side attacks have shot up in value, as XSS has come to be recognized as an important class of vulnerability. Client-side vulnerabilities have become even more valuable with the prevalence of web applications in day-to-day life and web-based control panels that hold the keys to vast troves of data. So, the vulnerability topology has certainly changed.
Back in 1999, when I discovered the vulnerability in Midikeys, I wrote a quick email to Bugtraq and washed my hands of it. Such actions would be frowned upon these days, since dropping a zero-day leading to root on (what was then) a common operating system without notifying the vendor is considered irresponsible.
Today, the notification process is completely different. If I were to discover a vulnerability, I would first contact the vendor with a well-written, and well-formatted advisory. I would then give the vendor two to four weeks to respond to my request, before publishing the details. If the vendor were to respond that they needed additional time to address the issue, I would work with them on timing before disclosure.
I learned the disclosure lesson years ago the hard way, after dropping a 0-day in Centrify's host management software on Bugtraq. In an instant, I destroyed an entire December weekend for their development team in 2012. Compounding things, the impact of such a disclosure was particularly sinister, as Centrify is a software security company.
A MITRE CVE CNA
In early 2016, MITRE began to struggle under the weight of all the new CVE requests. They piloted a program where companies and researchers could assign their own CVE numbers from a block delegated to that company or researcher. Being your own CVE Numbering Authority (CNA) is similar to being deputized by law enforcement, so I was somewhat shocked and excited to be considered as a CNA myself.
I was invited to their Bedford, MA campus to meet with the CVE assignment team. There had been some media coverage of myself and others in the field regarding the issues with CVE. A fellow researcher, Kurt Seifried, had created an alternative to CVE called the DWF, or Distributed Weakness Filing. It was an open-source solution to the CVE problem, and used GitHub to store all of its data. So, walking into the Bedford, MA MITRE campus was rather intimidating. I joked on Twitter about the knight meeting with the Southern Oracle in the movie 'The NeverEnding Story'.
For my most recent CVE credit, I returned to my roots, and examined something from my past. While looking around Solaris 11 x86, I noticed some binaries that created files in /tmp insecurely.
Using a combination of tools like l0phtwatch and some of my own, I found CVE-2020-14724. This is a vulnerability in DDU (Oracle Solaris 11 Device Driver Utility) that allows a local user to elevate their privileges to root if the root user runs the DDU command.
It exploits some file operations DDU performs in the /tmp directory, which can lead to clobbering critical files and creating a denial-of-service condition on the target system. It's also possible to abuse a chmod 666 operation to make /etc/shadow world-writable, allowing a local user to edit the shadow file.
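The underlying pattern is the classic /tmp race: a privileged program operating on a fixed, predictable filename that a local attacker can pre-create as a symlink. A minimal sketch of the unsafe pattern and the standard fix follows; the filename here is illustrative, not DDU's actual path:

```python
import os
import tempfile

# UNSAFE: a privileged process that writes to (or chmods) a fixed path
# like this can be redirected. An attacker pre-creates the file as a
# symlink to /etc/shadow, and the privileged operation lands on the
# symlink's target instead.
unsafe_path = "/tmp/scratch.dat"  # illustrative name, not DDU's real file

# SAFE: mkstemp creates an unpredictably named file atomically with
# O_CREAT|O_EXCL and mode 0600, so a planted symlink cannot be followed.
fd, safe_path = tempfile.mkstemp(prefix="scratch_")
os.write(fd, b"temporary data\n")
os.close(fd)
mode = os.stat(safe_path).st_mode & 0o777
os.remove(safe_path)
print(oct(mode))  # -> 0o600
```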
I wrote an exploit that automates this process and pops out a root shell. This was a bit of an old school vulnerability, and quite nostalgic of the 1990s, it being a /tmp race condition vulnerability.
I enjoy finding vulnerabilities and exploiting them. It's why I'm still looking for vulnerabilities when I can today. I also spend some of my time helping other researchers get CVE numbers assigned and disclose vulnerabilities they've discovered responsibly. My position in the Akamai SIRT allows me to protect Akamai's network and our customers while also contributing to the security of the internet as a whole. I finally understand the saying, "do what you love and you'll never work a day in your life."