RTOS/OS Questions


Page fault

What is a page fault?

A page fault occurs when a process accesses a virtual memory location that is not currently mapped by its page table. It can mean either that the process tried to access memory it has no right to, or simply that the relevant part of the application's memory has been swapped out. In the latter case the page is swapped back in and execution resumes where it was interrupted.
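
As a minimal illustration (not from the original notes), the following C program touches an address that has no page-table mapping; the resulting page fault is reported to the process as SIGSEGV and typically produces a core dump:

#include <stdio.h>

int main(void)
{
    int *p = NULL;      /* address 0 has no mapping in the process's page table */
    *p = 42;            /* page fault with no valid mapping -> SIGSEGV / core dump */
    printf("never reached\n");
    return 0;
}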

Boot sequence

The boot loader (bootstrap loader; on a PC, the BIOS) is a small program present in ROM. When a computer or any embedded system boots up, it is the first software that gets executed; its address is mapped in hardware to a ROM chip.

The boot loader typically contains only a few instructions; its main purpose is to access non-volatile or other memory in order to load and start the operating system or whatever other program the system is designed for.

- The Power On Self Test (POST) routine runs to detect certain hardware and to test that it works at a basic level.
- Hardware devices are initialized. PCI devices are assigned IRQs and input/output addresses, and a table of these devices is then displayed on screen.
- The BIOS searches for an operating system on the supported media, usually by looking for a boot sector on the devices specified in the CMOS settings.
- Once a valid boot sector has been found, it is copied into RAM and executed.

Boot sector

A sector has a length of 512 bytes. A sector becomes a "boot" sector because of its location and the hex value 0xAA55 in its final two bytes; the BIOS looks for this value when scanning potential boot media. When one creates a bootable floppy, a boot sector is tacked onto the beginning of the kernel image, so when that image is written to the start of the floppy, this first sector becomes the boot sector.
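
As a small, hedged sketch (the file name disk.img is only an assumed example), this C program reads the first 512-byte sector of a disk or floppy image and checks for the 0xAA55 signature in the last two bytes:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t sector[512];
    FILE *f = fopen("disk.img", "rb");          /* assumed image file name */
    if (f == NULL) { perror("disk.img"); return 1; }
    if (fread(sector, 1, sizeof sector, f) != sizeof sector) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    /* bytes 510..511 hold 0x55 0xAA, i.e. the 16-bit value 0xAA55 stored little-endian */
    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("boot signature present\n");
    else
        printf("no boot signature\n");
    return 0;
}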

Linux kernel bootup

-----------------------

The boot manager now loads the kernel image and possibly an initial RAM disk image. Once loaded into RAM, the kernel is executed and its setup code runs. The kernel must initialize any devices the system has. Even devices that have already been initialized by the BIOS are reinitialized; this provides portability and robustness by ensuring that every system is initialized in the same way, independent of the BIOS. The next step performed by the setup code is switching the CPU from Real Mode to Protected Mode.

Init
The first program that is run under the kernel is init. This program is always
process 1. The Linux kernel can be told which program to use as init by passing the
"init=" boot parameter. If this parameter is not specified, then the kernel will try to
execute '/sbin/init', '/etc/init', '/bin/init' or '/bin/sh' in that order. If none of
these exist, then the kernel will panic.
There are alternatives to using the init included with your system: you can
actually place any executable in place of init. If one were building an embedded system,
init could be replaced with a program written in C that runs faster and is
streamlined for the system. The script-based startup used by most systems is much
easier to use, though, because of how easy it is to make changes.

Check the site 

http://www.quickembeddedtips.com/


Inter process communication


- It is a technique for the exchange of data among two or more threads in one or more processes. IPC mechanisms include semaphores, mutexes, shared memory, signals, message passing, pipes, sockets and RPC calls.

Binary semaphore

The simplest kind of semaphore is the "binary semaphore", used to control access to a single resource, which is essentially the same as a mutex. A binary semaphore is always initialized with the value 1. When the resource is in use, the accessing thread calls P(S) to decrease this value to 0, and restores it to 1 with the V operation when the resource is ready to be freed.

The value of the semaphore is initialized to the number of equivalent shared resources it is implemented to control. In the special case where there is a single equivalent shared resource, the semaphore is called a binary semaphore. The general case semaphore is often called a counting semaphore.

When a signal is sent to a process, the operating system interrupts the process's normal flow of execution. If the process has previously registered a signal handler, that routine is executed; otherwise the default signal handler runs.

Message passing - Communication is made by the sending of messages to recipients. Forms of messages include function invocation, signals, and data packets.


RTOS / General OS questions

Difference between mutex and semaphore

A mutex is taken and given within the same task; we use it to protect a critical section, and it comes with special features such as priority inheritance and task-deletion safety.
Binary semaphores, however, are used not only for critical-section protection but also for synchronisation:
they can signal an event so that a pending task wakes the moment the semaphore becomes available.
Typically an ISR will release (give) the semaphore and a task waiting on it will unpend,
so we can serialise the threads or tasks (see the sketch below).
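
A rough POSIX sketch of that signalling pattern (a plain thread stands in for the ISR here, since POSIX has no real ISRs; on an actual RTOS you would use its own semaphore API instead):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t event;                    /* binary semaphore used as an event flag */

static void *task(void *unused)
{
    (void)unused;
    sem_wait(&event);                  /* pend until the event is signalled */
    printf("task: woke up on the event\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&event, 0, 0);            /* starts at 0: nothing has happened yet */
    pthread_create(&t, NULL, task, NULL);
    sleep(1);                          /* pretend a hardware event occurs now */
    sem_post(&event);                  /* the "ISR" gives the semaphore */
    pthread_join(t, NULL);
    sem_destroy(&event);
    return 0;
}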


1. what is a non re-entrant code?
-Reentrant code is code which does not rely on being executed without interruption before completion. Reentrant code can be used by multiple, simultaneous tasks. Reentrant code generally does not access global data. Variables within a reentrant function are allocated on the stack, so each instance of the function has its own private data. Non-reentrant code, to be used safely by multiple processes, should have access controlled via some synchronization method such as a semaphore.
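
A small illustrative sketch (the function names are made up for this example): the first version keeps its result in shared static storage and is therefore non-reentrant, while the second touches only its parameters and stack, so it can be used safely by several tasks at once:

#include <stdio.h>

static char shared_buf[32];            /* shared static data: the source of trouble */

/* Non-reentrant: two tasks calling this concurrently overwrite each other's result. */
char *int_to_str_unsafe(int n)
{
    snprintf(shared_buf, sizeof shared_buf, "%d", n);
    return shared_buf;
}

/* Reentrant: the caller supplies the buffer; only parameters and stack are used. */
char *int_to_str_safe(int n, char *buf, size_t len)
{
    snprintf(buf, len, "%d", n);
    return buf;
}

int main(void)
{
    char buf[32];
    printf("%s %s\n", int_to_str_unsafe(41), int_to_str_safe(42, buf, sizeof buf));
    return 0;
}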

2. how is RTOS different from other OS?
- A RTOS offers services that allow tasks to be performed within predictable timing constraints

3. Is unix a multitasking or multiprocessing operating system? what's the difference between the two?
- Unix is a multitasking operating system. Multiprocessing means the OS can run on multiple processors; a multiprocessing OS coordinates multiple processors running in parallel.

4. what is a core dump?
- A core dump is the recorded state of the working memory of a computer program at a specific time, generally when the program has terminated abnormally 
- includes the program counter and stack pointer, memory management information, and other processor and operating system flags and information
- a fatal error usually triggers the core dump: often a buffer overflow, where a programmer allocates too little memory for incoming or computed data, or a null-pointer access, a common coding error in which an unassigned memory reference is dereferenced.

5. what is stack overflow and heap overflow?
- A stack overflow occurs when the program tries to use more memory than the region reserved for the call stack (see the sketch below).
- The call stack contains the return addresses of the subroutines called and their local variables.
- Overflow occurs when too many nested functions are called (e.g. runaway recursion) or a huge amount of local variables is allocated.
- A heap overflow, by contrast, occurs when a program writes past the end of a block allocated on the heap, corrupting neighbouring heap data.
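
A minimal sketch of a stack overflow (compile without optimization, otherwise the compiler may turn the recursion into a loop): each call pushes another frame until the stack region is exhausted and the process crashes.

#include <stdio.h>

static void recurse(unsigned long depth)
{
    char frame[1024];                 /* a local array makes each stack frame bigger */
    frame[0] = (char)depth;
    recurse(depth + 1);               /* no base case: eventually overflows the stack */
}

int main(void)
{
    recurse(0);
    printf("never reached\n");
    return 0;
}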

6. Windows also has multiple processes, process priorities and switching between multiple processes; how is an RTOS different from that?
- An RTOS provides predictable, bounded timing constraints; a general purpose OS does not guarantee them.


7. how will you create a process in UNIX or our OS OSE?

- We can use the fork system call to create a process in UNIX; in OSE the system call create_process is used.
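
A minimal UNIX sketch of fork (the child also calls exec to run a different program, which is the usual pattern; the echo command is just an example):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                      /* duplicates the calling process */
    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {                   /* child sees pid == 0 */
        printf("child:  pid=%d\n", (int)getpid());
        execlp("echo", "echo", "child replaced by echo", (char *)NULL);
        _exit(1);                            /* reached only if exec fails */
    } else {                                 /* parent sees the child's pid */
        waitpid(pid, NULL, 0);               /* wait for the child to finish */
        printf("parent: child %d has exited\n", (int)pid);
    }
    return 0;
}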

8. what is a flat memory model and a shared memory model?
- In a flat memory model the code and data segments occupy a single address space.
- In the shared (segmented) model the large memory is divided into different segments, and a qualifier is needed to identify each segment.
- In a flat memory model the programmer does not need to switch segments for data and code.

9. what is paging, segmentation and why do we need them?

Paging
- Paging is a technique by which the OS makes the required data available as quickly as possible. It keeps some pages from the auxiliary device in main memory, and when a program needs a page that is not in main memory it fetches it from auxiliary memory and replaces a page in main memory with it. Specialised algorithms choose which page in main memory to replace.

Caching
- Caching is a concept in which data is temporarily stored in a high speed memory for faster access. The data is duplicated in the cache while the original is kept in some slower auxiliary memory. This brings the average access time down.

Segmentation
- Segmentation is a memory management scheme that is also used for memory protection. Any access outside the permitted area results in a segmentation fault.

Virtual Memory
- This technique enables non-contiguous memory to be accessed as if it were contiguous; it is commonly implemented using paging.


10. write a piece of code to check whether the stack grows upwards or downwards?

Define two local variables one after the other (or, more robustly, one in an outer function and one in a nested call) and compare their addresses; a sketch follows.
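
A rough sketch of that idea; note that the C standard does not define a stack growth direction and comparing addresses of distinct objects is not strictly portable, so treat this as a practical check (build without aggressive inlining):

#include <stdio.h>

static void inner(char *outer_local)
{
    char inner_local;
    if (&inner_local < outer_local)           /* inner frame at a lower address */
        printf("stack grows downwards\n");
    else
        printf("stack grows upwards\n");
}

int main(void)
{
    char outer_local;
    inner(&outer_local);
    return 0;
}
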
11. why do we require semaphores?
12. write a small piece of code protecting a shared memory variable with a semaphore
13. what are the different types of semaphores and where are they used?
14. what are the different inter process communication mechanisms?

15. what is present in .bss
- The bss section contains uninitialized data, and is allocated at run-time.  Until it is written to, it remains zeroed
16. what are the different segments of a program (code, data,stack,heap etc)

Object file or memory Segments:

Code segment
- This phrase refers to a portion of memory, or of an object file, that contains executable computer instructions. It is generally a read-only segment.

data segment
- This is one of the sections of a program in an object file or in memory, which contains the global variables that are initialized by the programmer. It has a fixed size, since all of the data in this section is set by the programmer before the program is loaded. However, it is not read-only, since the values of the variables can be altered at runtime.

.bss
This segment of memory is the part of the data area that contains uninitialized data (static variables). These are initialized to 0 at load time and may be assigned values during runtime.

stack segment
- This segment of memory is a special stack which stores information about the active subroutines of a task. It contains the return address to be branched to after a sub-routine has finished execution. It contains local variables of the sub-routines and the parameters that are passed to those sub-routines.

heap segment
- The segment of memory used for dynamic memory allocation is known as the heap. It is the responsibility of the programmer to deallocate heap memory after use; anything still allocated is reclaimed by the OS when the process exits.
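
A small annotated example (typical placement with Linux/ELF toolchains; exact section names can vary) showing which segment each kind of variable usually lands in:

#include <stdio.h>
#include <stdlib.h>

int initialized_global = 5;        /* data segment (initialized data) */
int uninitialized_global;          /* .bss: zeroed at load time */
const char *msg = "hello";         /* the string literal usually sits in read-only data */

int main(void)                     /* the machine code of main: code (text) segment */
{
    int local = 10;                            /* stack segment */
    int *dynamic = malloc(sizeof *dynamic);    /* heap segment */
    static int counter;                        /* .bss (static, uninitialized) */

    printf("%d %d %d %d %s\n",
           initialized_global, uninitialized_global, local, counter, msg);
    free(dynamic);
    return 0;
}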


17. what is the OS scheduling mechanism in our OSE?
18. how and where is the context related information stored? (in the PCB, the process control block)
19. what is the make command and what are all its uses?
20. what if a process is not responding, how is that handled? (watchdog timeout mechanism)

21. what is an ELF 
- Executable and Linking Format is a common standard file format for executables, object code, shared libraries, and core dumps.

22. what is priority inversion?
- Priority inversion is the scenario where a low priority task holds a shared resource (for example one guarded by a semaphore) that is required by a high priority task, so the high priority task is blocked until the low priority task releases the resource. The OS averts this by temporarily raising the priority of the low-priority task (priority inheritance) until it completes its work and releases the resource.

23. Locality of reference:
It describes a process accessing the same resources multiple times.
There are three types of locality, namely
temporal - a resource referenced at one point in time is likely to be referenced again in the near future.
spatial - the likelihood of referencing a resource is higher if a resource near it has just been referenced.
sequential - memory is accessed sequentially.

Compilation process
- In the first stage the C source code is compiled and assembly code (.s) is generated; in the second stage the assembly code is assembled into object code (.o), which is machine understandable. In the third stage linking occurs, in which the object code is linked against code libraries; the executable is generated in this stage.
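
For example, with GCC (shown only as one possible toolchain) the stages can be run one at a time on a trivial hello.c:

/* hello.c -- used to walk through the stages described above.
 *
 *   gcc -S hello.c        -> hello.s  (compile C source to assembly)
 *   gcc -c hello.s        -> hello.o  (assemble to object code)
 *   gcc hello.o -o hello  -> hello    (link against the C library, producing the executable)
 *
 * A single "gcc hello.c -o hello" performs all stages in one go.
 */
#include <stdio.h>

int main(void)
{
    printf("hello\n");
    return 0;
}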

make and makefile 
- see tutorial http://www.eng.hawaii.edu/Tutor/Make/index.html

CC - contains the current C compiler; defaults to cc.
CFLAGS - special options which are added to the built-in C rule.
$@ - full name of the current target.
$? - a list of files for the current dependency which are out of date.
$< - the source file of the current (single) dependency.
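
A small example makefile (the file names hello.c and util.c are assumed purely for illustration; recipe lines must start with a tab) using the variables listed above:

# assumed sources: hello.c and util.c
CC      = gcc
CFLAGS  = -Wall -O2
OBJS    = hello.o util.o

hello: $(OBJS)
	@echo "out-of-date prerequisites: $?"   # $? lists prerequisites newer than the target
	$(CC) $(CFLAGS) -o $@ $(OBJS)           # $@ is the target name, "hello"

%.o: %.c
	$(CC) $(CFLAGS) -c $<                   # $< is the .c file of this dependency

clean:
	rm -f hello $(OBJS)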

Short notes on interrupt handling


1) An interrupt is generated by some HW or SW source (e.g. a timer or a software event).
2) The CPU invokes the kernel interrupt handler.
3) The kernel interrupt handler invokes the vector handler, which returns the vector number (the vector handler holds the mapping between vector numbers and interrupt sources).
4) The kernel invokes the mask handler, which masks all existing equal or lower priority interrupts.
5) The interrupt routine associated with the vector number is invoked.
6) The kernel invokes the mask handler again to restore the old mask.

RTOS vs General purpose OS


  • The biggest difference is determinism. An RTOS has a deterministic scheduler: for a given set of tasks, your process always executes within the same number of microseconds or milliseconds from one schedule to the next, and operating system services consume only known and expected amounts of time.
  • In UNIX or Windows the scheduler is usually soft real-time (as opposed to the hard real-time scheduling of an RTOS). Soft real-time means the scheduler tries to ensure your process runs every X milliseconds, but may occasionally fail to do so.
  • Modern RTOSes also make sure that a) no interrupt is ever lost, and b) no interrupt can be blocked by a lower priority process.
  • RTOSes allow fast and efficient context switching with very low overhead.

 What is scope of a variable

- Global Scope

- Scope Inside Blocks

- Automatic: auto

- Optimization Hint: register

- Static Storage: static

- External References: extern

What are storage types

- There are four storage classes:
auto - the default for local variables (e.g. a plain local int); storage is on the stack
extern - the variable is defined in another file or module and only referenced here
static - the variable retains its value throughout the execution of the program
register - a hint to keep the variable in a CPU register, the fastest storage (register --> cache --> RAM in decreasing order of speed)
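
A short sketch showing the storage classes in use (the function and variable names are made up for this example):

#include <stdio.h>

int global_count = 0;          /* usable from other files via "extern int global_count;" */

void counter_demo(void)
{
    auto int temp = 0;         /* auto: the default for locals; lives on the stack */
    static int calls = 0;      /* static: keeps its value across calls */
    register int i;            /* register: hint to keep the variable in a CPU register */

    for (i = 0; i < 3; i++)
        temp++;
    calls++;
    printf("temp=%d calls=%d\n", temp, calls);
}

int main(void)
{
    counter_demo();            /* prints temp=3 calls=1 */
    counter_demo();            /* prints temp=3 calls=2 -- static retained its value */
    return 0;
}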

Thread pooling

Thread pool is a collection of managed threads usually organized in a queue, which execute the tasks in the task queue.

Creating a new thread object every time you need something executed asynchronously is expensive. With a thread pool you simply add the task you want executed asynchronously to the task queue, and the thread pool takes care of assigning an available thread, if any, to that task. As soon as the task is completed, the now-available thread requests another task (assuming any are left). A minimal sketch follows.
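
A compact pthread-based sketch of the idea, with many simplifications (fixed number of workers, a bounded circular task queue, no shutdown handling); it is meant only to show the queue-plus-worker structure, not to be production code:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  64

/* A task is just a function pointer plus an argument. */
typedef struct { void (*fn)(void *); void *arg; } task_t;

static task_t queue[QUEUE_SIZE];               /* bounded circular task queue */
static int q_head = 0, q_tail = 0, q_count = 0;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_not_empty = PTHREAD_COND_INITIALIZER;

/* Add a task to the queue and wake one waiting worker (a full queue drops the task here). */
static void pool_submit(void (*fn)(void *), void *arg)
{
    pthread_mutex_lock(&q_lock);
    if (q_count < QUEUE_SIZE) {
        queue[q_tail] = (task_t){ fn, arg };
        q_tail = (q_tail + 1) % QUEUE_SIZE;
        q_count++;
        pthread_cond_signal(&q_not_empty);
    }
    pthread_mutex_unlock(&q_lock);
}

/* Each worker loops forever: wait for a task, pop it, run it outside the lock. */
static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0)
            pthread_cond_wait(&q_not_empty, &q_lock);
        task_t t = queue[q_head];
        q_head = (q_head + 1) % QUEUE_SIZE;
        q_count--;
        pthread_mutex_unlock(&q_lock);
        t.fn(t.arg);
    }
    return NULL;
}

static void print_task(void *arg) { printf("task %ld done\n", (long)arg); }

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (long i = 0; i < 10; i++)
        pool_submit(print_task, (void *)i);
    sleep(1);               /* crude: give the workers time; exiting main ends them */
    return 0;
}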

Different types of memory fragmentation

In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing storage capacity and in most cases performance. The term is also used to denote the wasted space itself.

There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation. Various storage allocation schemes exhibit one or more of these weaknesses.

Internal fragmentation occurs when an allocation is made larger than the amount of storage actually needed, for example because allocations are rounded up to a fixed block size; the extra space is wasted. While this seems wasteful, it is often accepted in return for increased efficiency or simplicity. The term "internal" refers to the fact that the unusable storage is inside the allocated region but is not being used.

External fragmentation is the phenomenon in which free storage becomes divided into many small pieces over time.

Data fragmentation occurs when a piece of data in memory is broken up into many pieces that are not close together. It is typically the result of attempting to insert a large object into storage that has already suffered external fragmentation.

Spin lock, semaphore and Mutex


A spinlock is a lock where the thread simply waits in a loop ("spins"), repeatedly checking until the lock becomes available.
It is a form of busy-waiting. It is useful when the lock is held only for a very short time, since it avoids the overhead of a context switch,
which is why it is mainly used inside system kernels.
The longer a lock is held by a thread, the greater the risk that the thread will be interrupted by the O/S scheduler while holding the lock. If this happens, other threads are left "spinning" (repeatedly trying to acquire the lock) while the thread holding the lock makes no progress towards releasing it, and they are postponed indefinitely until the holding thread can finish and release it.

nice explanation on http://stackoverflow.com/questions/5869825/when-should-one-use-a-spinlock-instead-of-mutex

Mutex:

Is a key to a toilet. One person can have the key - occupy the toilet - at the time. When finished, the person gives (frees) the key to the next person in the queue.

Officially: "Mutexes are typically used to serialise access to a section of  re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section."
Ref: Symbian Developer Library

(A mutex is really a semaphore with value 1.)

Semaphore:

Is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count, i.e. the count of keys, is set to 4 at the beginning (all four toilets are free), and the count is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when for example one person leaves a toilet, the semaphore is increased to 1 (one free key) and given to the next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."
Ref: Symbian Developer Library



Difference between mutex and binary semaphore
---------------------------------------------------
1. A mutex has an owner: only the thread that acquired the lock can release it,
whereas a semaphore taken by one process/thread can be released by any other thread or process.
2. If a process holding a semaphore is killed, another process waiting for it to be released never gets notified;
whereas with a mutex, if another process is waiting for the lock and the owner gets killed, the kernel can inform the waiter (robust mutexes).
3. Recursion is not possible with a semaphore: locking the same semaphore again from the same process leads to a deadlock.
With a mutex, however, recursion can be enabled, so the same thread can acquire the mutex lock any number of times (see the sketch below).
4. Semaphores do not support priority inheritance, so they should not be preferred on an RTOS where priority inversion is a concern.
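
A small POSIX sketch of point 3: a mutex created with the PTHREAD_MUTEX_RECURSIVE attribute can be re-locked by the thread that already owns it, which would deadlock with a plain mutex (or a binary semaphore):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rec_lock;       /* initialised with a recursive attribute below */

static void inner(void)
{
    pthread_mutex_lock(&rec_lock);     /* second lock by the same owner: lock count goes to 2 */
    puts("inner");
    pthread_mutex_unlock(&rec_lock);
}

static void outer(void)
{
    pthread_mutex_lock(&rec_lock);     /* first lock */
    inner();
    pthread_mutex_unlock(&rec_lock);
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rec_lock, &attr);

    outer();    /* with a normal (non-recursive) mutex this would deadlock in inner() */

    pthread_mutex_destroy(&rec_lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}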

Good example of semaphore


http://see.stanford.edu/materials/icsppcs107/23-Concurrency-Examples.pdf

Race conditions arise in software when separate processes or threads of execution depend on some shared state. Operations upon shared states are critical sections that must be mutually exclusive in order to avoid harmful collision between processes or threads that share those states.

Here is a simple example:

Let us assume that two threads T1 and T2 each want to increment the value of a global integer by one. Ideally, the following sequence of operations would take place:

  1. Integer i = 0; (memory)
  2. T1 reads the value of i from memory into register1: 0
  3. T1 increments the value of i in register1: (register1 contents) + 1 = 1
  4. T1 stores the value of register1 in memory: 1
  5. T2 reads the value of i from memory into register2: 1
  6. T2 increments the value of i in register2: (register2 contents) + 1 = 2
  7. T2 stores the value of register2 in memory: 2
  8. Integer i = 2; (memory)

Since all threads run in the same address space, they all have access to the same data and variables. If two threads simultaneously attempt to update a global counter variable, it is possible for their operations to interleave in such a way that the global state is not correctly modified. Although such a case may arise only one time out of thousands, a concurrent program needs to coordinate the activities of multiple threads using something more reliable than just depending on the fact that such interference is rare. The semaphore is designed for just this purpose.

A semaphore is somewhat like an integer variable, but is special in that its operations (increment and decrement) are guaranteed to be atomic: you cannot be halfway through incrementing the semaphore and be interrupted and waylaid by another thread trying to do the same thing. That means you can increment and decrement the semaphore from multiple threads without interference. By convention, when a semaphore is zero it is "locked" or "in use"; otherwise, positive values indicate that the semaphore is available. A semaphore will never have a negative value.
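
A minimal POSIX sketch of the counter example above: two threads each increment a shared counter 100000 times, with a binary semaphore serialising the increments. Removing the sem_wait/sem_post pair lets the updates interleave, and the final value then often comes out below 200000.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int counter = 0;
static sem_t lock;                    /* binary semaphore, initialised to 1 */

static void *increment(void *unused)
{
    (void)unused;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);              /* decrement: semaphore becomes 0, "in use" */
        counter++;                    /* critical section: read, increment, write */
        sem_post(&lock);              /* increment: semaphore back to 1, "available" */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);    /* always 200000 with the semaphore in place */
    sem_destroy(&lock);
    return 0;
}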

 

Deadlock

----------

Every thread is waiting for an action to be taken by another thread, so none of them can make progress; we call this situation "deadlock."


