GIOS Midterm Practice Questions
Part 1
- What are the key roles of an operating system?
- Can you make a distinction between OS abstractions, mechanisms, and policies?
- What does the principle of separation of mechanism and policy mean?
- What does the principle "optimize for the common case" mean?
- What happens during a user-kernel mode crossing?
- What are some of the reasons why user-kernel mode crossing happens?
- What is a kernel trap? Why does it happen? What are the steps that take place during a kernel trap?
- What is a system call? How does it happen? What are the steps that take place during a system call?
- Contrast the design decisions and performance tradeoffs among monolithic, modular and microkernel-based OS designs.
Part 2
- Process vs. thread: describe the distinctions. What happens on a process vs. thread context switch?
- Describe the states in the lifetime of a process.
- Describe the lifetime of a thread.
- Describe all the steps which take place for a process to transition from a waiting (blocked) state to a running (executing on the CPU) state.
- What are the pros and cons of message-based vs. shared-memory-based IPC?
- What are benefits of multithreading? When is it useful to add more threads, when does adding threads lead to pure overhead? What are the possible sources of overhead associated with multithreading?
- Describe the boss-worker multithreading pattern. If you need to improve a performance metric like throughput or response time, what could you do in a boss-worker model? What are the limiting factors in improving performance with this pattern?
- Describe the pipelined multithreading pattern. If you need to improve a performance metric like throughput or response time, what could you do in a pipelined model? What are the limiting factors in improving performance with this pattern?
- What are mutexes? What are condition variables? Can you quickly write the steps/code for entering/exiting a critical section for problems such as reader/writer, or reader/writer with selective priority (e.g., reader priority vs. writer priority)? What are spurious wake-ups, how do you avoid them, and can you always avoid them? Do you understand the need for using a while() loop for the predicate check in the critical-section entry code examples in the lessons?
- What’s a simple way to prevent deadlocks? Why?
- Can you explain the relationship among kernel-level vs. user-level threads? Think through a general M:N scenario (as described in the Solaris papers), and in the current Linux model. What happens during scheduling, synchronization, and signaling in these cases?
- Can you explain why some of the mechanisms described in the Solaris papers (for configuring the degree of concurrency, for signaling, the use of LWPs…) are not used or necessary in the current threads model in Linux?
- What’s an interrupt? What’s a signal? What happens during interrupt or signal handling? How does the OS know what to execute in response to an interrupt or signal? Can each process configure its own signal handler? Can each thread have its own signal handler?
- What’s the potential issue if an interrupt or signal handler needs to lock a mutex? What’s the workaround described in the Solaris papers?
- Contrast the pros and cons of a multithreaded (MT) and multiprocess (MP) implementation of a webserver, as described in the Flash paper.
- What are the benefits of the event-based model described in the Flash paper over MT and MP? What are the limitations? Would you convert the AMPED model into an AMTED (async multi-threaded event-driven) model? How do you think an AMTED version of Flash would compare to the AMPED version of Flash?
- There are several sets of experimental results from the Flash paper discussed in the lesson. Do you understand the purpose of each set of experiments (what was the question they wanted to answer)? Do you understand why each experiment was structured in a particular way (why they chose the variables to be varied, the workload parameters, the measured metric…)?
- If you ran your server from the class project for two different traces: (i) many requests for a single file, and (ii) many random requests across a very large pool of very large files, what do you think would happen as you add more threads to your server? Can you sketch a hypothetical graph?