Welcome, Linux Explorer!

These projects are designed to give you hands-on experience with the core concepts of Linux. Start with the basics to build your confidence before moving on to the projects.

πŸ–₯️ Getting Started: Setting Up Your Linux Environment

Before diving into Linux commands and projects, you’ll need a Linux system to experiment with. The easiest way to do this on a Windows computer is by using VirtualBox to run Ubuntu inside a virtual machine.

Your Challenge

Download and install VirtualBox on your Windows machine, then create a new virtual machine and install Ubuntu. There are many beginner-friendly guides and videos online to help you through the process. Rather than following only one tutorial step by step, try to explore, experiment, and resolve issues on your own. This is part of learning how Linux users think and solve problems.

Questions to Explore

  • Which version of Ubuntu should I install β€” Desktop or Server?
  • How much memory (RAM) and disk space should I allocate to my virtual machine?
  • What should I do if Ubuntu doesn’t start properly the first time?

Use these questions as opportunities to search online, read community forums, and test different approaches. Developing this problem-solving habit will help you far beyond this project.

Tip

Once you have Ubuntu running, take a few minutes to look around β€” open the terminal, browse the file explorer, and get comfortable with the environment. You’ll be using it throughout all the upcoming projects.

πŸ’‘ Linux Fundamentals

Before you start, it's helpful to understand a few key concepts that form the basis of a Linux system.

What is Linux?

At its heart, Linux is an operating system, just like Windows or macOS. It's the software that manages all the hardware and resources on your computer. Linux is a powerful and flexible system used in everything from smartphones and servers to supercomputers.

The Command Line, Terminal & Bash

Instead of clicking icons, you use a command line to tell the computer what to do. The terminal is the program that gives you a window to the command line. When you type a command into the terminal, a program called a shell processes it. Bash (Bourne Again SHell) is the most common shell on Linux. In simple terms: you use the terminal to access the command line, which is interpreted by the Bash shell.
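You can see this chain for yourself. The commands below are safe to run anywhere; $SHELL and $HOME are environment variables that the shell expands before the command itself ever runs:

```shell
# Which shell is set as your login shell?
echo "$SHELL"

# Bash expands $HOME before echo ever sees it
echo "Your home directory is $HOME"

# Print the version of Bash itself
bash --version | head -n 1
```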

The File System Hierarchy

Linux organizes all files and directories in a single tree-like structure, starting from the root directory, which is represented by a single forward slash (/). This structure is consistent across all Linux systems, making it easy to navigate.

Linux Directory Tree (Simplified)

/ (root) – The top-level directory of the Linux filesystem
β”œβ”€β”€ bin – Essential user binaries (commands like ls, cp, mv)
β”œβ”€β”€ boot – Boot loader files, kernel, etc.
β”œβ”€β”€ dev – Device files (representing disks, USBs, etc.)
β”œβ”€β”€ etc – System configuration files and scripts
β”œβ”€β”€ home – User home directories
β”‚   β”œβ”€β”€ user1
β”‚   β”‚   β”œβ”€β”€ Desktop
β”‚   β”‚   β”œβ”€β”€ Documents
β”‚   β”‚   β”œβ”€β”€ Downloads
β”‚   β”‚   └── Pictures
β”‚   └── user2
β”‚       β”œβ”€β”€ Desktop
β”‚       β”œβ”€β”€ Documents
β”‚       β”œβ”€β”€ Downloads
β”‚       └── Pictures
β”œβ”€β”€ lib – Libraries and kernel modules for essential programs
β”œβ”€β”€ media – Mount points for removable media (USB drives, CDs, etc.)
β”œβ”€β”€ opt – Optional or third-party software packages
β”œβ”€β”€ root – Home directory of the root (administrator) user
β”œβ”€β”€ run – Runtime data (process IDs, sockets, temporary files)
β”œβ”€β”€ sbin – Essential system binaries (for administration)
β”œβ”€β”€ srv – Data for system services (e.g., web or FTP servers)
β”œβ”€β”€ sys – Virtual filesystem with information about devices/drivers
β”œβ”€β”€ tmp – Temporary files (often cleared on reboot)
β”œβ”€β”€ usr – User programs, libraries, documentation, and utilities
β”‚   β”œβ”€β”€ bin – Non-essential user binaries
β”‚   β”œβ”€β”€ lib – Libraries for user programs
β”‚   └── share – Shared resources like docs, icons, etc.
└── var – Variable data (logs, mail, caches, databases, spool)

Linux vs Windows File Systems

Unlike Linux, which uses a single root directory (/) for everything, Windows organizes files into separate drives (such as C:\, D:\, etc.). Each drive has its own root, and directories are separated with a backslash (\) instead of a forward slash (/).
For example, your documents might be stored under C:\Users\YourName\Documents in Windows, whereas on Linux they would be under /home/yourname/Documents.

Another difference is that in Linux, everything is treated as a file (even hardware devices like USBs or disks), while in Windows, devices are usually represented with drive letters. This unified design in Linux makes scripting, automation, and system management more consistent.
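You can see the "everything is a file" idea for yourself. In the listing below, the leading c marks /dev/null as a character device rather than a regular file (-) or a directory (d):

```shell
# A device file: the first character of the permissions column is 'c'
ls -l /dev/null

# You can write to it like any file; the data is simply discarded
echo "this text vanishes" > /dev/null

# Reading it back returns nothing (immediate end-of-file)
cat /dev/null
```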

πŸš€ The Linux Command Line Sandbox

Before diving into the projects, let's get you comfortable with the most fundamental commands. Think of your home directory as a safe place to play and experiment. You can't break anything important here! πŸ˜‰

Step 1: Where are you?

First, let's find out your current location in the file system and who you are. This is a common first step for any task.

  • pwd: "Print Working Directory." It shows your exact location. It should be something like /home/your_username.
  • whoami: Shows your current username.
pwd
whoami

Run these commands. What do you see? This is your base camp!

Step 2: Looking around

The ls command is your flashlight in the dark. It lists the contents of a directory.

  • ls: Lists the files and folders in your current directory.
  • ls -l: Lists contents in a "long" format, showing more details like permissions, owner, size, and date.
  • ls -a: Shows all files, including hidden ones (which start with a dot, like .bashrc).
  • ls -R: Lists contents recursively, meaning it shows all files and subdirectories inside folders.
  • ls -lh: Long format with sizes in "human-readable" form (e.g., KB, MB).
  • ls -lt: Long format sorted by modification time (newest files first).
ls
ls -l
ls -a
ls -R
ls -lh
ls -lt

Practice using these flags and notice the differences:
- The first character in ls -l output shows d for directories and - for files.
- Try ls -R in a folder with subdirectories. What happens?
- How does ls -lt help you find recently modified files?

Step 3: Creating and moving things

Now, let's build something. You'll create a folder and then some files inside it, and practice moving them around.

  • mkdir: "Make directory." Creates a new folder.
  • cd: "Change directory." Moves you into a different folder.
  • touch: Creates a new, empty file.
  • mv: "Move." Moves files or directories to another location. It can also be used to rename files or folders.
# Create a practice folder and move into it
mkdir my_practice
cd my_practice

# Create two files
touch file1.txt file2.txt
ls

# Move file1.txt into a new folder
mkdir subfolder
mv file1.txt subfolder/
ls
ls subfolder

# Rename file2.txt to renamed.txt
mv file2.txt renamed.txt
ls

Try it out:
- What happened to file1.txt after using mv file1.txt subfolder/?
- Notice that mv is also used for renaming. How does it differ from moving?
- Can you create another file and move it into subfolder as practice?

Step 4: Cleaning up

When you're done experimenting, you can remove files and directories.

  • rm: "Remove." Deletes a file.
  • rmdir: "Remove directory." Deletes an empty folder.
  • rm -r: "Remove recursively." Deletes a directory and everything inside it (all files and subdirectories).
rm renamed.txt
ls
cd ..
rmdir my_practice
# rmdir fails above because my_practice still contains subfolder; rm -r removes it
rm -r my_practice

After running these commands, can you explain what happened at each step? What does cd .. do? What happens when you run rmdir my_practice while subfolder still exists inside it? Why can rm -r my_practice remove the whole folder even though it still contains files and subdirectories?

Step 5: Putting it all together

Now, try a quick challenge. Without looking at the previous steps, try to:

  1. Create a new directory named linux_fun.
  2. Go inside that directory.
  3. Create three empty files named notes.md, report.pdf, and image.png.
  4. List all the files to confirm they were created.
  5. Delete all the files and the directory.

If you can do this from memory, you're ready for the projects! You've mastered the basics. If not, don't worryβ€”just revisit the steps above until you feel confident.
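If you get stuck, here is one possible solution to check yourself against (attempt it from memory first):

```shell
mkdir linux_fun                        # 1. create the directory
cd linux_fun                           # 2. move into it
touch notes.md report.pdf image.png    # 3. create the three files
ls                                     # 4. confirm they exist
cd ..                                  # step back out before deleting
rm -r linux_fun                        # 5. remove the directory and its files
```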

πŸ“‚ Project 1: Automated File Organizer

Learn the fundamentals of directory and file manipulation. This project involves writing a simple script to organize files into folders based on their type.

Concepts you'll learn: whoami (check your current user), pwd (print current directory), ls (list files), mkdir (make directories), touch (create empty files), mv (move files), chmod (change permissions), and the basics of shell scripting.

  1. Check your environment. Before creating files and directories, it’s useful to confirm who you are and where you are in the system.
    • whoami: shows your current user.
    • pwd: prints the β€œpresent working directory” (the folder you are currently in).
    • ls: lists the files and folders in your current location.
    whoami
    pwd
    ls

    This helps you verify your current user, the folder you’re working in, and what files are there before starting.

  2. Create a project directory. Directories (or β€œfolders”) help keep files organized. The command below shows one way to do it in a single line, and then an alternative step-by-step method.

    Method 1: One-liner with &&

    mkdir file_sorter && cd file_sorter

    Here, mkdir (β€œmake directory”) creates the folder, and cd (β€œchange directory”) moves into it. The && means β€œonly run the second command if the first succeeds.”

    Method 2: Step by step

    mkdir file_sorter
    cd file_sorter

    This does the same thing, but separates the steps so you can clearly see the directory being created first, then moving into it.

    Tip: Run pwd again to confirm that you are now inside file_sorter.

  3. Create some dummy files. The touch command creates new, empty files. These act as placeholders so we can test our script later.
    touch report.doc notes.txt image.jpg data.csv archive.zip presentation.ppt

    Check your files with ls. You should see the six files you just created.

  4. Create the script file. Scripts are just text files containing commands to be executed in order. Use a text editor like nano or vim to create one.
    nano organize_files.sh
  5. Add the script content. This script creates folders and moves files into them based on their extension.
    • mkdir -p: creates directories (the -p ensures no error if they already exist).
    • mv: moves files into the correct folder.
    • ls -R: lists files recursively, showing the new structure.

    Follow these steps carefully to use nano if you are new:

    1. After running nano organize_files.sh, your terminal opens a blank editor.
    2. Use the keyboard to type or paste the script content exactly as shown below.
    3. To save your work:
      • Press Ctrl + O (the letter O, not zero) to write out the file.
      • Press Enter to confirm the file name (organize_files.sh).
    4. To exit nano, press Ctrl + X. This closes the editor and returns you to the terminal.
    #!/bin/bash
    # A simple script to organize files
    
    echo "Creating directories..."
    mkdir -p Documents Pictures Spreadsheets Archives Presentations
    
    echo "Organizing files..."
    mv *.doc *.txt Documents/
    mv *.ppt Presentations/
    mv *.jpg Pictures/
    mv *.csv Spreadsheets/
    mv *.zip Archives/
    
    echo "Done! Here is the new structure:"
    ls -R

    After saving and exiting, your script is ready to be made executable and run.

  6. Make the script executable. By default, new files are just text. To run it as a program, you need to give it β€œexecute” permission using chmod.
    chmod +x organize_files.sh
  7. Run your script! Use ./ to run a script in the current folder.
    ./organize_files.sh

    You should see messages as the script runs, ending with a list of your organized folders and files. Try running ls or pwd afterwards to confirm your results.
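As an optional follow-up, the same sorting idea can be written with a loop and a case statement, which handles files one at a time and simply skips types it doesn't recognize (unlike a bare mv *.xyz, which errors out when nothing matches). This is a sketch for experimentation, not part of the project above; it runs in a throwaway directory so nothing real is moved:

```shell
#!/bin/bash
# Work in a scratch directory so no real files are touched
cd "$(mktemp -d)" || exit 1
touch report.doc notes.txt image.jpg data.csv archive.zip presentation.ppt

# Sort every regular file in the current directory by extension
for f in *; do
    [ -f "$f" ] || continue            # skip anything that is not a regular file
    case "$f" in
        *.doc|*.txt) dest=Documents ;;
        *.jpg|*.png) dest=Pictures ;;
        *.csv)       dest=Spreadsheets ;;
        *.zip)       dest=Archives ;;
        *.ppt)       dest=Presentations ;;
        *)           continue ;;       # leave unknown types where they are
    esac
    mkdir -p "$dest"
    mv "$f" "$dest/"
done

ls -R
```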

πŸ” Project 2: Permissions Manager

Linux permissions determine who can read, write, or execute files and directories. By mastering this, you’ll understand how collaboration and security are enforced in Linux systems.

Concepts you'll learn: useradd, groupadd, chown, chmod, user/group ownership, octal vs. symbolic permissions. (Note: most commands here require sudo).

  1. Create a new group. This group will contain your project members.

    First use this command to display the current content of the /etc/group file on your Linux system. It will show all current groups:

    cat /etc/group

    Then, this command will create a new group called developers on your Linux system:

    sudo groupadd developers
  2. Create two new users. Alice (a developer) and Bob (a read-only user).

    First, run this command to print the list of all the current user accounts on your Linux system:

    getent passwd

    Then, run these two commands to create the two required user accounts:

    sudo useradd -m -g developers -s /bin/bash alice
    sudo useradd -m -s /bin/bash bob

    Both alice and bob were created with their own home directories (-m) and Bash as their login shell (-s /bin/bash). Alice's primary group is explicitly set to developers (-g developers), while bob was assigned a private group named after himself by default.

  3. Create a shared project directory.
    sudo mkdir /srv/project_alpha
  4. Set ownership. Assign root as the owner and developers as the group.
    sudo chown root:developers /srv/project_alpha
  5. Understand Linux file permissions (numbers & symbols).

    Every file or directory in Linux has three sets of permissions: owner (user), group, and others. Each set can have three rights:

    • r (read = 4) β†’ view a file, or list directory contents
    • w (write = 2) β†’ modify, create, or delete files
    • x (execute = 1) β†’ run a file, or enter a directory

    These numbers are added together to form a permission value for each category:

    Value   Calculation   Permissions
    7       4+2+1         rwx β†’ full access
    6       4+2           rw- β†’ read & write
    5       4+1           r-x β†’ read & execute
    4       4             r-- β†’ read only
    0       none          --- β†’ no access

    Example: 755 = 7 (owner: rwx), 5 (group: r-x), 5 (others: r-x).

  6. Set permissions for a shared project folder.

    By default, a new directory like /srv/project_alpha is owned by root and not writable by normal users. To allow both the developers group and the owner to fully manage it, while giving others read-only access, run:

    sudo chmod 775 /srv/project_alpha
    

    βœ” 775 = owner (rwx), group (rwx), others (r-x)
    βœ” 755 = owner full, group & others read & execute only (developers couldn't write)
    βœ” 770 = owner & group full, others blocked completely
    βœ” 700 = private folder, only the owner can access

  7. Alternative syntax (symbolic mode).

    Instead of numbers, you can describe permissions with letters:

    sudo chmod u=rwx,g=rwx,o=rx /srv/project_alpha
    

    u = user/owner, g = group, o = others. Same as 775, but more descriptive.

  8. Test the permissions in practice.

    To confirm that our permissions work, we’ll switch users and try creating/reading files inside /srv/project_alpha. The su command (substitute user) lets you temporarily become another user without logging out. Adding - ensures you start a full login shell (with that user’s environment).

    # Switch to Alice (member of developers group)
    su - alice
    cd /srv/project_alpha
    touch dev_file.txt   # βœ… should work: Alice can write here
    
    # Switch to Bob (not in developers group by default)
    su - bob
    cd /srv/project_alpha
    touch readonly.txt   # ❌ should fail: Bob has only read/execute rights
    cat dev_file.txt     # βœ… should work: Bob can still read existing files
    

    βœ” Alice succeeds because she belongs to the developers group, which has full (rwx) access.
    βœ” Bob cannot create files since "others" only have read/execute (r-x) permission, but he can still open and view files.
    βœ” This test confirms that 775 is correctly allowing collaboration for developers while limiting everyone else to read-only access.
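To connect the octal values from step 5 with what is actually stored on disk, you can experiment on a scratch file. stat can print the current mode in both notations at once (this uses GNU stat's -c format flag, available on Ubuntu; no sudo required, and the file name is just an example):

```shell
touch perms_demo.txt
chmod 640 perms_demo.txt                # owner rw-, group r--, others ---
stat -c '%a %A %n' perms_demo.txt       # prints: 640 -rw-r----- perms_demo.txt

chmod u=rwx,g=rx,o= perms_demo.txt      # symbolic form of 750
stat -c '%a %A %n' perms_demo.txt       # prints: 750 -rwxr-x--- perms_demo.txt

rm perms_demo.txt
```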

⏱️ Project 3: Process Control

Every program running on Linux is a process. Being able to start, find, monitor, and stop processes is a fundamental skill for troubleshooting and system administration.

Concepts you'll learn: ps (list processes), grep (filter output), kill (terminate processes), backgrounding with &.

  1. Start a simple background process.

    The command below runs sleep 1000, which tells the system to "do nothing" for 1000 seconds. Adding & runs it in the background, so your terminal is free for other commands. You’ll see a job number in [1] and the process ID (PID).

    sleep 1000 &
    
  2. Find the process.

    Use ps aux to list all processes. To avoid scrolling through hundreds of lines, pipe the output to grep sleep, which filters only lines containing β€œsleep”. In the output, the second column is the PIDβ€”the unique identifier of the process. Write it down.

    ps aux | grep sleep
    

    βœ… Expect to see a line like: user 12345 ... sleep 1000. Here, 12345 is the PID.

  3. Terminate the process.

    To stop the process, use kill followed by the PID you noted earlier. This sends a TERM signal, politely asking the process to exit. Replace [PID] with the actual number.

    kill [PID]
    

    βœ… No output means the signal was sent successfully. If the process refuses to exit, you can use kill -9 [PID] to force it (use cautiously).

  4. Verify the process is gone.

    Run the search command again. If the process terminated, you will no longer see your sleep entry. You may still see the grep sleep command itself, which is normalβ€”it appears because it matches the word β€œsleep”.

    ps aux | grep sleep
    

    βœ… Expect: your original sleep 1000 process is gone. If it still shows up, double-check the PID and try kill again.
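Once the ps aux | grep workflow feels natural, pgrep and pkill are convenient shortcuts that match processes by name, so the whole start/find/stop cycle can be compressed to:

```shell
sleep 1000 &                 # start a background process
pgrep sleep                  # print the PID(s) of matching processes
pkill sleep                  # send TERM to every matching process you own
pgrep sleep || echo "no sleep processes left"
```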

🐞 Project 4: The Log Investigator

Troubleshooting is a key skill. In this project, you'll run a script designed to fail, investigate the error logs, and then fix the bug.

Concepts you'll learn: Reading logs, standard error/output, I/O redirection (`>`), basic debugging.

  1. Create the broken script. Create a file named broken_script.sh using nano.
    nano broken_script.sh
  2. Add the script content. This script tries to list the contents of a directory that doesn't exist, which will cause an error.
    #!/bin/bash
    # This script has a bug!
    
    echo "Starting backup process..."
    
    # Attempt to list files in a non-existent directory
    ls /non_existent_directory
    
    echo "Backup process finished."
  3. Make the script executable.
    chmod +x broken_script.sh
  4. Run the script and redirect its output to a log file. This is the crucial part. > script.log sends the normal output (stdout) to the file, and 2>&1 sends the error output (stderr) to the same place.
    ./broken_script.sh > script.log 2>&1
  5. Investigate the log file. Use cat or less to view the log file's contents.
    cat script.log

    You will see the normal "Starting..." message, followed by an error like "ls: cannot access '/non_existent_directory': No such file or directory".

  6. Fix the bug. The error message tells you the problem. Edit the script (nano broken_script.sh) and change /non_existent_directory to a real directory, like /home or . (the current directory).
  7. Re-run and verify. Run the script again with redirection. This time, when you cat script.log, you should see no error message, only the success messages and the file listing. You've successfully debugged the script!
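The 2>&1 trick merges both streams into one file. You can also keep them separate, which makes errors easy to isolate in longer runs. A small demonstration using the same failing ls (the file names out.log and err.log are arbitrary):

```shell
# stdout β†’ out.log, stderr β†’ err.log; ls exits non-zero because one path is missing
ls /home /non_existent_directory > out.log 2> err.log || true

cat out.log   # the listing of /home
cat err.log   # the "No such file or directory" message from ls
```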

βš™οΈ Project 5: Service Management & Automation

In this project, you’ll practice managing background services and automating tasks. Services are essential for keeping applications running reliably, while automation ensures repetitive jobs happen without manual input. By the end, you’ll understand how to use systemctl for service management, journalctl for logs, and cron for automation.

Concepts you'll learn: Managing services with systemctl, viewing logs with journalctl, scheduling tasks with cron, and verifying service activity.

  1. Create a logging script. You’ll start by writing a simple script that generates log entries. Open a file named my_service.sh using nano:
    nano my_service.sh

    This script will simulate a background service by writing timestamps into a log file.

  2. Add script content. Paste the following into the file:
    #!/bin/bash
    echo "This is a log entry from my_service.sh at $(date)" | sudo tee -a /var/log/my_service.log
    

    Every time this script runs, it appends a new line with the current date and time to /var/log/my_service.log. One caveat: writing under /var/log requires root, and sudo inside a cron job runs without a terminal, so it will fail unless passwordless sudo is configured. If no log entries appear, point the script at a file in your home directory instead (for example ~/my_service.log) and drop the sudo.

  3. Make the script executable. Without this step, you won’t be able to run the script directly.
    chmod +x my_service.sh
  4. Schedule the script with cron. Cron is a time-based job scheduler. You’ll use it to run the script automatically every minute.

    Open your crontab configuration:

    crontab -e

    Then add this line at the bottom (replace your_username with your actual username):

    * * * * * /home/your_username/my_service.sh

    This means: run the script every minute, indefinitely.

  5. Monitor the service activity. Wait 2–3 minutes, then check if logs are being written:
    cat /var/log/my_service.log

    You should see multiple log entries with timestamps. To monitor system logs live, use:

    journalctl -f

    -f makes journalctl behave like tail -f, showing new log lines as they appear.

  6. Clean up. Once you’ve confirmed the automation works, remove the cron job to stop the script from running every minute:
    crontab -r

    This deletes your crontab entirely. If you want finer control, instead run crontab -e and just remove the specific line.

βœ… Expected Outcome: After 2–3 minutes, your /var/log/my_service.log file will contain multiple log entries created automatically by cron. You’ll also have practiced service-like automation and learned how to monitor logs in real time.
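For reference, the five asterisks in the crontab entry are time fields read left to right. A few illustrative schedules, shown as comments (the project only needs the every-minute form):

```shell
# minute  hour  day-of-month  month  day-of-week   meaning
#   *      *        *           *        *         run every minute
#   0      2        *           *        *         run daily at 02:00
#  */5     *        *           *        *         run every 5 minutes
#   0      9        *           *       1-5        run at 09:00, Monday to Friday
```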

πŸ“Š Project 6: Basic System Monitoring

Monitoring your system's resources is essential to troubleshoot performance issues and understand how your Linux system behaves. This project introduces tools for checking CPU, memory, and disk usage in real-time and via summary commands.

Concepts you'll learn: top, htop, df, du, and identifying high-resource processes.

  1. Monitor running processes with top or htop.

    top provides a real-time overview of running processes and resource usage. Focus on these key columns:

    • PID β†’ Process ID, useful for managing processes.
    • %CPU β†’ CPU usage of each process.
    • %MEM β†’ Memory usage of each process.

    Press q to exit. Alternatively, htop offers a more interactive interface (use arrow keys to navigate, F10 to quit). If not installed, run sudo apt-get install htop.

    top

    βœ… Expect: a live-updating list of processes, showing which programs are using the most CPU and memory.

  2. Check disk usage.

    Use df to see free space on disks and du to check how much space specific directories use.

    • df -h β†’ "human-readable" output, showing sizes in MB/GB instead of bytes.
    • du -sh /home/your_username β†’ total disk usage of your home directory.
    df -h
    du -sh /home/your_username

    βœ… Expect: df shows disk partitions and free space; du shows directory sizes, helping identify large folders.

  3. Put it all together: identify high-resource processes.

    Open two terminal windows. In the first, run a CPU-intensive command like:

    yes > /dev/null

    This command continuously outputs "y" to /dev/null, consuming CPU. In the second terminal, run top or htop and observe which process uses the most CPU.

    βœ… Expect: the yes process will appear at the top of top or htop sorted by CPU usage. Press Ctrl+C in the first terminal to stop it.
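A natural companion to df and du is ranking directories by size. The sketch below totals each item one level deep and sorts the results so the largest appear last (the path is just an example; sort -h understands human-readable sizes like K, M, G):

```shell
# Show the size of each item one level deep, largest last;
# errors from unreadable subdirectories are silenced
du -h --max-depth=1 /var/log 2>/dev/null | sort -h
```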

βœ… Key Takeaways: You now know how to:

  • Monitor real-time CPU and memory usage with top and htop.
  • Check available disk space and directory sizes with df and du.
  • Identify resource-hogging processes and respond accordingly.