Timeline Creation – Part 2 (Super Timeline)

As promised in my previous blog post, I am moving on to creating a Super Timeline. My reason for doing this after the filesystem timeline comes down purely to the time it takes to process.

The super timeline is aptly named, as it is a very powerful analysis tool. The problem with it, though, is the sheer amount of information it can contain! When working with a super timeline it is very important to have a pivot point that lets you narrow down the time frame you are interested in. If you are lucky this will be when your IDS fired; if not, it may come from speaking to the victim. Generally in the world of DFIR, though, we are the last to know, and it is weeks after the incident!

For this blog I am going to use an image that I created while doing some research into RDP sessions on an XP box. I created the image within a VM and used FTK Imager to acquire it as a .001 (DD) file.

Rather than mount it within the SIFT workstation, I actually used FTK Imager to mount the image as a physical/logical drive using the "File system / Read only" mount method. The drive letter that gets assigned is already shared with the SIFT workstation.

The first thing I want to do is gather the timeline data and write it into a bodyfile, which we will process further in a later step. To do so I used a tool called log2timeline-sift, which allows the analyst to automate the creation of a timeline. It's a tool I was introduced to on my SANS 508 course and one I have enjoyed using.

As a side note, the author of the tool, Kristinn Gudjonsson, has carried out a lot of work in this area and has produced an excellent set of timelining tools based around the Python programming language. I plan a future blog on my learning curve with his Plaso tool.

Back to the task at hand, though: I need to create the bodyfile. To do this I ran the following command within the SIFT workstation:
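The original screenshot of the command is not reproduced here, so the following is a minimal sketch of the invocation. The timezone, partition number, and image path are all assumptions for this example; adjust them to your own case, and check `log2timeline-sift -h` on your SIFT build for the exact options.

```shell
# Illustrative sketch only -- timezone, partition and path are assumptions.
#   -z : the timezone the suspect system was configured with
#   -p : the partition number within the image to process
#   -i : the image file to run against
log2timeline-sift -z GMT -p 0 -i /cases/xp-rdp/xp-rdp.001
```

The wrapper walks the filesystem and runs the relevant log2timeline input modules for you, writing its output to a bodyfile we can refine in the next step.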

Before we move on to the next step, I want to create a whitelist for the refining process to remove some files that can cause a lot of noise and may not be necessary, depending on what incident you are dealing with.

To do so, within the workstation I ran the following command, which opened up a text document:
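The command itself was shown as a screenshot; here is a sketch of the idea. The file path, editor, and example entries below are assumptions for illustration, not SIFT defaults -- each line of the whitelist is a keyword, and timeline rows matching a keyword are dropped during the refining step.

```shell
# Open (or create) the whitelist in an editor -- the path is an assumption:
#   gedit ~/whitelist.txt
# Each line is a keyword; matching timeline rows are filtered out later.
# The entries below are only examples of common noise sources.
cat > ~/whitelist.txt << 'EOF'
Content.IE5
index.dat
wuauclt
EOF
```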

With experience, you can add or remove any items you feel benefit your current task.

At this point I want to process and refine the bodyfile, removing any duplicate entries along with the artefacts I have asked to be removed via my whitelist. To do this I use a tool called l2t_process, and the following is my command line and result:
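The command and its output were captured as screenshots in the original post; the sketch below shows the shape of the invocation. The paths are assumptions for this example, and you should confirm the available options (including how your build takes a whitelist) with `l2t_process -h`.

```shell
# Illustrative sketch -- paths are assumptions for this example.
# l2t_process de-duplicates the bodyfile and writes CSV to stdout:
l2t_process -b /cases/timeline/bodyfile.txt > timeline.csv

# On a real job, add a date range around your pivot point, e.g.:
# l2t_process -b /cases/timeline/bodyfile.txt 04-01-2012..04-02-2012 > timeline.csv
```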




As in a previous blog, I am going to view this in Microsoft Excel, but this time using the template available from SANS, as it adds colour to different actions on the system, e.g. USB insertions, program activation, etc.

To do this, we open Excel on our host machine, select "From Text" on the Data ribbon, and then browse via the \\siftworkstation share to the newly created timeline.csv.

For the options, I leave it as Delimited; on the next screen I deselect Tab, select Comma, and click Finish. On the final screen we click OK.

As I mentioned in my last blog, I also like to freeze the top pane, turn on filtering, and hide the Time Zone, Host, and Version columns.

What we are left with is a super timeline for that machine, which we can now analyse. In the last command line you also have the option to add a date range, which I would suggest you do on a normal job. I only chose not to for this blog because I was using an image with less than half an hour of system activity on it, yet even after filtering it still contains 111,158 lines!



Happy timelining!

10 comments to Timeline Creation – Part 2 (Super Timeline)

  • H. Carvey  says:

    Just out of curiosity, have you considered a more deliberate approach to creating a timeline, and less of an automatic method?

    • Chip_DFIR  says:

      Harlan, as it happens I have recently been reading your book and was looking to use some of your techniques when I do my next timeline at work. This blog post was written back when I learned the timeline process, but even that one allows you to timeline what you need and not necessarily automate. When I get some time I am also planning to look at Plaso and Timesketch for their capabilities. As with everything, though, it's finding time to get these things done!

  • H. Carvey  says:

    I see a lot of people using the automated processes, and not getting what they could out of them. It’s not the fault of the tools, it’s more about how they’re used.

    I’ve seen analysts get an image in and not be able to do anything immediately with it because their first step is to run the automated process, which can take several hours. In several cases, I’ve been told by other analysts that the process hadn’t completed after 5 hrs. This past Fri, I received images of two systems, and had my notes and findings in to the team lead in 4 hrs. During that exam, I also found that the last modification times on all of the user NTUSER.DAT files had been modified at about the same time, by some other process. That actually threw off an analyst who was looking at another system. The simple fact was that we were not interested in all of the user information, only the info from one profile.

    The process I use works extremely well for me; my requirements are to get comprehensive answers to the 'client' in a timely manner. The 'client' may be another analyst who provides me with a subset of files extracted from an image. The process works equally well for those situations where I do get full images, and also allows me to incorporate IOCs from previous engagements. I'm not opposed to other methods or processes, but while I'm digging into another tool and trying to learn the mindset behind it, and then how to modify it to meet my needs, I'm not doing the work that I already have. Further, when I talk to other analysts, one of the more disheartening things I find out is that by running the automated process, they don't have case notes, can't remember specifics like where they found files, etc.

    • Chip_DFIR  says:

      We use examination diaries where I work, and we are all police, so it tends to be second nature for us to record all actions. I totally understand where you are coming from, though: having the tools is one thing, but knowing their capabilities and limitations is another, as is knowledge of the artifacts on the system to aid the analysis of what your timeline presents you.

      A lot of that will come with experience and testing though which is why it is good to see people in the community like yourself writing blogs and publications getting that knowledge out there.

  • H. Carvey  says:

    For the purposes of full disclosure, I do see the same sort of thing with tools I’ve written, specifically RegRipper. Most of the analysts who download it simply run it as is, apparently hoping that it detects and parses everything they need it to, without ever understanding what it does, where the plugins come from, and not bothering to ask questions or say anything to the author when they don’t see something that they expect to see. Some of the analysts have gone on to give presentations at major conferences where they get up and say, “RegRipper doesn’t…”.

    So, I get that that’s how most analysts use tools…download stuff they’ve heard about, and if they ever actually run the tool, do so in a manner not quite as intended, and when something doesn’t work quite right, they simply move on.

  • H. Carvey  says:

    “…people in the community…writing blogs and publications getting that knowledge out there.”

    Unfortunately, the “community” is entirely too passive. It is said that many people read my blog, for example, but regardless of how far a post is spread, there’s very little reaction or engagement beyond maybe someone clicking “Like”. The end result is that I have no idea what it is that other analysts are interested in or want to see.

    • Chip_DFIR  says:

      Probably highly frustrating for you, Harlan. Getting people to engage will always be a difficult thing, as it takes effort that some may not be willing to put in. Personally, I use my blog as a learning tool for myself, and if it helps others then that's a good thing, but I know you are on a whole new level compared to me. I like to learn, and your blog posts help me to understand; I suppose on a personal front I learn from you and others in the community, and that sort of directs me to where I should study more. I don't necessarily want to be spoonfed the knowledge, though, so maybe that's why I don't interact as much as I should, but I will certainly try a little harder in the future to interact more.

      • H. Carvey  says:

        “Probably highly frustrating for you Harlan”

        I know that others have said the same thing, but I think that’s a result of assumption, and not actually asking me. To be honest, it’s not…because it doesn’t affect me. If I have a question for someone, I’ll ask. That others will choose not to do so doesn’t “frustrate” me.

        “… I know you are on a whole new level compared to me.”

        Not really. I’m no different from anyone else. Again, this is an assumption that you choose to make, and something that I neither look for, nor endorse.

        “…your blog posts help me to understand…”

        I’m glad that they can be of such use.

        • Chip_DFIR  says:

          Cheers Harlan and my apologies for assuming.

