5

Information entered on the command line (the command name and its arguments) has to be considered public: anyone can access it using commands like ps.

Is there a known tool to automate this kind of information gathering? Such a tool would regularly look at the process list, parse command lines, and gather information like usernames and passwords.

I am willing to run this kind of tool on my company's systems to identify bad practices, but I can't seem to find one.

Scott Pack
Gael Muller

5 Answers

8

On Linux systems, all of this information is available through the proc interface, and as such is fairly easily scriptable. As a working example (from a RHEL6 system), let's look at the rsyslog process.

[user@node1 ~]$ ps aux | grep rsyslog
root      1105  0.0  0.0 248680  1460 ?        Sl   May29   0:42 /sbin/rsyslogd -i /var/run/syslogd.pid -c 4
user     26440  0.0  0.0 103236   824 pts/4    S+   11:38   0:00 grep rsyslog

Right, so the pid is 1105, easy enough. Since the proc filesystem exposes each process under /proc/<pid>, let's see what data is being presented.

[user@node1 ~]$ ls -l /proc/1105
ls: cannot read symbolic link /proc/1105/cwd: Permission denied
ls: cannot read symbolic link /proc/1105/root: Permission denied
ls: cannot read symbolic link /proc/1105/exe: Permission denied
total 0
dr-xr-xr-x. 2 root root 0 Jun 15 11:39 attr
-rw-r--r--. 1 root root 0 Jun 15 11:39 autogroup
-r--------. 1 root root 0 Jun 15 11:39 auxv
-r--r--r--. 1 root root 0 Jun 15 11:39 cgroup
--w-------. 1 root root 0 Jun 15 11:39 clear_refs
-r--r--r--. 1 root root 0 Jun 15 11:35 cmdline
-rw-r--r--. 1 root root 0 Jun 15 11:39 coredump_filter
-r--r--r--. 1 root root 0 Jun 15 11:39 cpuset
lrwxrwxrwx. 1 root root 0 Jun 15 11:39 cwd
-r--------. 1 root root 0 Jun 15 11:39 environ
lrwxrwxrwx. 1 root root 0 Jun 15 11:39 exe
dr-x------. 2 root root 0 Jun 15 11:39 fd
dr-x------. 2 root root 0 Jun 15 11:39 fdinfo
-r--------. 1 root root 0 Jun 15 11:39 io
-rw-------. 1 root root 0 Jun 15 11:39 limits
-rw-r--r--. 1 root root 0 Jun 15 11:39 loginuid
-r--r--r--. 1 root root 0 Jun 15 11:39 maps
-rw-------. 1 root root 0 Jun 15 11:39 mem
-r--r--r--. 1 root root 0 Jun 15 11:39 mountinfo
-r--r--r--. 1 root root 0 Jun 15 11:39 mounts
-r--------. 1 root root 0 Jun 15 11:39 mountstats
dr-xr-xr-x. 6 root root 0 Jun 15 11:39 net
-r--r--r--. 1 root root 0 Jun 15 11:39 numa_maps
-rw-r--r--. 1 root root 0 Jun 15 11:39 oom_adj
-r--r--r--. 1 root root 0 Jun 15 11:39 oom_score
-rw-r--r--. 1 root root 0 Jun 15 11:39 oom_score_adj
-r--r--r--. 1 root root 0 Jun 15 11:39 pagemap
-r--r--r--. 1 root root 0 Jun 15 11:39 personality
lrwxrwxrwx. 1 root root 0 Jun 15 11:39 root
-rw-r--r--. 1 root root 0 Jun 15 11:39 sched
-r--r--r--. 1 root root 0 Jun 15 11:39 schedstat
-r--r--r--. 1 root root 0 Jun 15 11:39 sessionid
-r--r--r--. 1 root root 0 Jun 15 11:39 smaps
-r--r--r--. 1 root root 0 Jun 15 11:39 stack
-r--r--r--. 1 root root 0 Jun 15 11:35 stat
-r--r--r--. 1 root root 0 Jun 15 11:39 statm
-r--r--r--. 1 root root 0 Jun 15 11:35 status
-r--r--r--. 1 root root 0 Jun 15 11:39 syscall
dr-xr-xr-x. 6 root root 0 Jun 15 11:39 task
-r--r--r--. 1 root root 0 Jun 15 11:39 wchan

I'm running this as a normal user, so some of the information is unavailable. No big deal, because what we really want is the file called cmdline.

[user@node1 ~]$ cat /proc/1105/cmdline
/sbin/rsyslogd-i/var/run/syslogd.pid-c4[user@node1 ~]$

The arguments look like they're all run together; in fact, they are separated by null characters. You'll get a friendlier display by turning the null characters into newlines (but note this drops the distinction between argument separators and actual newlines within an argument):

[user@node1 ~]$ tr '\0' '\n' </proc/1105/cmdline; echo
/sbin/rsyslogd
-i
/var/run/syslogd.pid
-c
4
[user@node1 ~]$

Granted, this doesn't really give us anything beyond what we got from the ps output. However, depending on what you want, it may be more easily scriptable. If you wanted to work exclusively in bash, for instance, you could use this glob to iterate over all processes:

for p in /proc/[0-9]*/cmdline; do
  …
done

Then use that as a file list for processing.
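For instance, here is a minimal sketch of that loop; the 'pass' pattern is only an illustrative heuristic, not a real detection rule:

for p in /proc/[0-9]*/cmdline; do
  # cmdline may disappear (short-lived process) or be unreadable; skip quietly
  args=$(tr '\0' ' ' < "$p" 2>/dev/null) || continue
  if printf '%s' "$args" | grep -qi 'pass'; then
    pid=${p#/proc/}; pid=${pid%/cmdline}
    echo "PID $pid: $args"
  fi
done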

If you are instead into Perl, there exists a module called Proc::ProcessTable that queries the proc system and exposes the same information as an object.

All that being said, if you want to look for passwords on the command line, you may sometimes be disappointed: some applications deliberately overwrite their arguments to mask them. MySQL, for example:

[user@node2 ~]$ ps aux | grep mysql
user     7974  0.0  0.1 157116  2732 pts/0    S+   11:47   0:00 mysql -u root -px xxxxxxxxxx dashboard
[user@node2 ~]$ cat /proc/7974/cmdline
mysql-uroot-pxxxxxxxxxxxdashboard[user@node2 ~]$
Gilles 'SO- stop being evil'
Scott Pack
  • Awesome answer. I wish I could give you all my upvotes, seriously. – chao-mu Jun 15 '12 at 16:48
  • Given that this question is more or less answered by [Simple example auditd configuration?](http://security.stackexchange.com/questions/4629/5226#5226), I'm surprised you didn't recommend auditd. I wouldn't recommend sampling here, you'll miss important stuff. Regarding `cmdline`, the arguments are separated by null characters, and `find` is overkill to find the `cmdline` files. – Gilles 'SO- stop being evil' Jun 18 '12 at 01:17
  • @Gilles: It's true, auditd is normally the "right" answer. I think I fell into the classic case of "answering the question as asked, not giving the answer needed". – Scott Pack Jun 18 '12 at 12:50
  • I am accepting this answer because it seems (from the answers and my research) that there is no such tool. This post would be a good starting point for someone willing to write one. – Gael Muller Jun 18 '12 at 15:40
  • Don't forget that you may have access to the environment too, so using a password in an environment variable may not be safer than passing it as a command line argument: `ps auxwwe | grep --color -i pass`. That said, on my modern Ubuntu I cannot see the environment of processes that I don't own, so it's slightly more secure. – jrwren Jun 21 '12 at 18:38
3

This is the sort of task humans do a lot better than machines, and by that I mean pattern recognition. You could try writing a script that continually calls ps (cutting out the command and then sorting out duplicates with sort -u) and then review the output later. Note, however, that this persists data you've already identified as potentially sensitive, so make sure its permissions are sane or that it is stored in an encrypted form. Review periodically until you feel comfortable with your users, then ditch the script to reduce maintenance costs.
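A minimal sketch of that approach, assuming GNU coreutils; the log path and interval are only examples:

#!/bin/bash
# Snapshot every process's command line periodically, keep unique entries,
# and store them in a root-only file since the data may be sensitive.
LOG=/root/cmdline-samples.log
install -m 600 /dev/null "$LOG"   # start with an empty file, mode 600
while true; do
  # ps -eo args= prints just the full command line of every process
  ps -eo args= | sort -u - "$LOG" -o "$LOG"
  sleep 10
done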

Jumping back to your hypothetical tool: the number of false positives it would generate would be astounding unless the passwords were already known to it, and in that case, since you are now sharing sensitive information across users, you are increasing risk, not decreasing it.

One way to reduce those false positives is to keep the script as simple as possible, for example by restricting it to common commands with consistent arguments that require passwords on the command line. Even then, shell invocations can get complex enough that you'll get plenty of false negatives and false positives. The best you can do is regular expressions, and those will be a nightmare to implement. This diminishes the value of such a tool, especially once you consider the maintenance cost; there are probably better things an admin could be doing with their time.

Also, this is a reactive approach to security, not a preventative one. Maybe there is a way on your system to prevent users from seeing other users' process information? That may be the better question to ask.
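For what it's worth, Linux kernels from 3.3 on do offer something along these lines via the hidepid mount option for /proc; a minimal sketch:

# Remount /proc so users only see their own processes.
# hidepid=2 hides other users' /proc/<pid> directories entirely;
# hidepid=1 merely blocks reading their contents.
mount -o remount,hidepid=2 /proc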

Note that this information also ends up in the user's shell history file, essentially persisting the disclosure, so checking the permissions and ownership of those files might be a useful task. The same goes for logs that might contain passwords or other sensitive information, and some applications, like mysql, keep their own command history.
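A quick sketch for spotting history files that other users can read (GNU find syntax; the paths are the usual defaults):

# List history files under home directories that are readable by "other"
# (catches .bash_history, .mysql_history, and similar).
find /home /root -maxdepth 2 -name '.*history' -perm /o+r -ls 2>/dev/null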

chao-mu
  • On FreeBSD, doing `ps auxf` as a non-administrative user showed me all running processes, their full paths, and their arguments. – Safado Jun 15 '12 at 14:30
  • Thank you, Safado. I was thinking of netstat. I removed that part of my answer and tried to replace it with useful information. – chao-mu Jun 15 '12 at 16:42
2

Automating such a process is really a trivial task, once you know what you want to accomplish.

Assuming your systems are Linux boxes, bash/Python/Perl scripts can easily be written to execute commands and parse the results. Such scripts could also compare the differences between outputs from day to day, and could be scheduled to run regularly as cron jobs.
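A minimal sketch of such a script, with example paths, that could be run from a daily cron job:

#!/bin/bash
# /root/bin/ps-diff.sh (example path) - report command lines that changed
# since the last run. Schedule with cron, e.g.: 0 6 * * * /root/bin/ps-diff.sh
SNAP=/root/ps-snapshots
mkdir -p "$SNAP" && chmod 700 "$SNAP"
ps -eo args= | sort -u > "$SNAP/current"
# Print additions/removals relative to the previous snapshot, if any
[ -f "$SNAP/previous" ] && diff "$SNAP/previous" "$SNAP/current"
mv "$SNAP/current" "$SNAP/previous"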

The usefulness of the results would depend on the person monitoring and interpreting them.

The same could presumably be done for Windows boxes using batch scripts (though I'm not sure).

1

Hmm, you know, as I think about this question I'm not so certain how trivial a task this would be. You want information about each process image, and presumably about short-lived instances as well.

So if you want all of this information, you could be talking about using the Windows Filtering Platform on Windows, and something like epoll on POSIX systems, to get all of the event I/O notifications.

Then you need a way to persist this data and transport it to a central repository where it can be aggregated for consumption. That is a lot of data, and it would require a lot of horsepower to analyze live; you would probably have to depend on a process like ETL or Map/Reduce before you could make sense of it. You could of course just send it all to a syslog server and go through it by hand, but I believe that would quickly overwhelm you.

Suppose instead you could do the processing and filtering locally on each machine and export only positive matches to a syslog/aggregation server.
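A tiny sketch of that local-filter-then-forward idea, using the stock logger utility; the pattern and tag are only examples:

# Scan current command lines and send only suspicious ones to syslog,
# which can then relay matches to a central aggregation server.
# (Note: the pipeline's own grep will match itself; a real tool would filter that out.)
ps -eo pid=,args= | grep -i 'pass' | while read -r pid args; do
  logger -t cmdline-audit "possible credential on command line: pid=$pid $args"
done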

I don't know of a tool that does this specifically. If you have an AV/HIPS product installed that lets you write custom rules, you could maybe get close. A from-scratch custom application would be a healthy project for a single developer, and probably a fun one.

M15K
1

Looking regularly at the process list isn't very reliable. Any sampling method is highly likely to miss short-lived processes, which a more targeted attacker who isn't afraid of temporarily using a lot of CPU would still capture.

Use your operating system's existing logging mechanisms. Even if you go for sampling because systematic logging is too expensive, sample for (say) a full minute now and then (preferably at times when things happen).

For example, under Linux, use the audit subsystem (auditd) to log all execve system calls. See Linux command logging? for a general overview and Simple example auditd configuration? for a how-to. Note that under Linux the arguments of a process are publicly readable (except under some restrictive security settings, with SELinux or in similar high-security environments), but the environment variables can only be seen by other processes run by the same user. To get an idea of what is public and what isn't, run ls /proc/1.
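Following the how-to linked above, a minimal sketch of the audit rules (run as root; the key name exec-log is only an example):

# Log every execve() call; add both ABIs on a 64-bit system.
auditctl -a always,exit -F arch=b64 -S execve -k exec-log
auditctl -a always,exit -F arch=b32 -S execve -k exec-log
# Review the captured command lines later, with fields decoded:
ausearch -i -k exec-log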

Gilles 'SO- stop being evil'