Sunday, 14 August 2016

Engineering Student From Kerala Built A Working Ironman Suit In Just Rs 50,000!

If you are a fan of superheroes, chances are you have fancied having an Ironman suit of your own.

So did Vimal Govind Manikandan, an engineering student from Kerala. But unlike most others, he has an Ironman suit of his own, because Vimal decided to make one for himself.
And it is no ordinary 'costume'. Vimal's suit is fully functional, weighs around 100 kg, and can lift up to 150 kg thanks to battery-powered pressurised air chambers. (But sorry folks, it can't fly.)
Even though Vimal has no plans to turn himself into a superhero, he says the suit has future potential.
"Actually the future of this product is mainly in defence, industrial weight lifting, material handling etc," he said.
Vimal says he was inspired by Hollywood movies to build the robot, especially the suits in the Avengers movie. In fact, this isn’t even his first exosuit. His team built their first prototype back in 2015, which was much larger but mechanically-powered.
The young engineer and his team are now working on improving the prototype, especially to fix its walking ability, which, Vimal admits, is restricted for now.

Wednesday, 3 August 2016


Memory Layout of C Programs


A typical memory representation of a C program consists of the following sections.
1. Text segment
2. Initialized data segment
3. Uninitialized data segment
4. Stack
5. Heap

A typical memory layout of a running process
1. Text Segment:
A text segment, also known as a code segment or simply as text, is one of the sections of a program in an object file or in memory that contains the executable instructions.
As a memory region, the text segment may be placed below the heap and stack in order to prevent heap and stack overflows from overwriting it.
Usually, the text segment is sharable so that only a single copy needs to be in memory for frequently executed programs, such as text editors, the C compiler, the shells, and so on. Also, the text segment is often read-only, to prevent a program from accidentally modifying its instructions.
2. Initialized Data Segment:
The initialized data segment, usually called simply the data segment, is a portion of the virtual address space of a program that contains the global and static variables which are initialized by the programmer.
Note that the data segment is not read-only, since the values of its variables can be altered at run time.
This segment can be further classified into initialized read-only area and initialized read-write area.
For instance, the global string defined by char s[] = "hello world" in C, and a C statement like int debug = 1 outside main (i.e. global), would be stored in the initialized read-write area. A global C statement like const char* string = "hello world" makes the string literal "hello world" be stored in the initialized read-only area and the character pointer variable string in the initialized read-write area.
Ex: a static int i = 10 will be stored in the data segment, and a global int i = 10 will also be stored in the data segment.
3. Uninitialized Data Segment:
The uninitialized data segment is often called the “bss” segment, named after an ancient assembler operator that stood for “block started by symbol.” Data in this segment is initialized by the kernel to arithmetic 0 before the program starts executing.
Uninitialized data starts at the end of the data segment and contains all global and static variables that are initialized to zero or do not have explicit initialization in the source code.
For instance a variable declared static int i; would be contained in the BSS segment.
For instance a global variable declared int j; would be contained in the BSS segment.
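To make the placement concrete, here is a small sketch (not from the original text; the file name seg.c is illustrative) that collects the examples from the two sections above into one file. The comments state where each object is expected to live; the exact layout is implementation-defined. Running the binutils size utility on the compiled program reports the text, data and bss sizes, so you can watch data shrink and bss grow as initializers are removed.

//seg.c

const char *string = "hello world"; /* literal in the initialized read-only area, pointer in read-write data */
char s[] = "hello world";           /* initialized read-write data segment */
int debug = 1;                      /* initialized data segment */
static int i = 10;                  /* initialized data segment */
int j;                              /* BSS: zero before the program starts */
static int k;                       /* BSS */

int main(void) { return 0; }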
4. Stack:
The stack area traditionally adjoined the heap area and grew in the opposite direction; when the stack pointer met the heap pointer, free memory was exhausted. (With modern large address spaces and virtual memory techniques they may be placed almost anywhere, but they still typically grow in opposite directions.)
The stack area contains the program stack, a LIFO structure, typically located in the higher parts of memory. On the standard PC x86 architecture it grows toward address zero; on some other architectures it grows in the opposite direction. A “stack pointer” register tracks the top of the stack; it is adjusted each time a value is “pushed” onto the stack. The set of values pushed for one function call is termed a “stack frame”; a stack frame consists, at minimum, of a return address.
The stack is where automatic variables are stored, along with information that is saved each time a function is called. Each time a function is called, the address of where to return to and certain information about the caller’s environment, such as some of the machine registers, are saved on the stack. The newly called function then allocates room on the stack for its automatic and temporary variables. This is how recursive functions in C can work: each time a recursive function calls itself, a new stack frame is used, so one set of variables doesn’t interfere with the variables from another instance of the function.
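As a quick illustration of stack frames (a sketch; the file and function names are illustrative), each recursive call below receives a fresh frame, so each level's automatic variable has its own address:

//frames.c

#include <stdio.h>

unsigned long fact(unsigned int n) {
 int local;                                   /* automatic variable: one per stack frame */
 printf("n = %u, &local = %p\n", n, (void *)&local);
 return (n <= 1) ? 1 : n * fact(n - 1);
}

int main(void) {
 printf("5! = %lu\n", fact(5));
 return 0;
}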
5. Heap:
The heap is the segment where dynamic memory allocation usually takes place.
The heap area begins at the end of the BSS segment and grows to larger addresses from there. The heap area is managed by malloc, realloc, and free, which may use the brk and sbrk system calls to adjust its size (note that the use of brk/sbrk and a single “heap area” is not required to fulfill the contract of malloc/realloc/free; they may also be implemented using mmap to reserve potentially non-contiguous regions of virtual memory in the process’ virtual address space). The heap area is shared by all shared libraries and dynamically loaded modules in a process.
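A small sketch (not from the original text; the file name layout.c is illustrative) that prints one address from each region. On a typical Linux system the text, data, bss, heap and stack addresses come out roughly in the order described above, although address space layout randomization means the exact values change from run to run:

//layout.c

#include <stdio.h>
#include <stdlib.h>

int initialized = 1;   /* initialized data segment */
int uninitialized;     /* BSS */

int main(void) {
 int automatic = 0;                         /* stack */
 int *dynamic = malloc(sizeof *dynamic);    /* heap */

 printf("text  (main)         : %p\n", (void *)main); /* function-to-void* cast is a common extension */
 printf("data  (initialized)  : %p\n", (void *)&initialized);
 printf("bss   (uninitialized): %p\n", (void *)&uninitialized);
 printf("heap  (malloc'd)     : %p\n", (void *)dynamic);
 printf("stack (automatic)    : %p\n", (void *)&automatic);

 free(dynamic);
 return 0;
}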
Memory Management in C

There are two ways in which memory can be allocated in C:

•           by declaring variables
•           by explicitly requesting space from C

We have discussed variable declaration in other lectures; here we will describe dynamic memory allocation and memory management.

C provides several functions for memory allocation and management:

•           malloc and calloc, to reserve space
•          realloc, to resize a reserved block of memory, possibly moving it to a new location
•          free, to release space back to C

These functions are declared in the stdlib library header, stdlib.h

What happens when a pointer is declared?

Whenever a pointer is declared, all that happens is that C allocates space for the pointer.

For example,

char *p;

allocates space in memory for the pointer itself (typically 4 or 8 consecutive bytes, depending on the platform), which is associated with the variable p. p’s type is declared to be pointer to char. However, the memory location occupied by p is not initialised, so it may contain garbage.

It is often a good idea to initialise the pointer at the time it is declared, to reduce the chance of a random value in p being used as a memory address:

char *p = NULL;

At some stage during your program you may want p to point to the location of some string.

A common error is to simply copy the required string into p:

strcpy(p, "Hello");

Often, this will result in a “Segmentation Fault”. Worse yet, the copy may actually succeed.

//a.c

#include <stdio.h>
#include <string.h>

int main(void) {
 char *p;          /* uninitialised: holds a garbage address */
 char *q = NULL;

 printf("Address of p = %p\n", (void *)p);
 strcpy(p, "Hello");    /* undefined behaviour: p points at garbage */
 printf("%s\n", p);
 printf("About to copy \"Goodbye\" to q\n");
 strcpy(q, "Goodbye");  /* undefined behaviour: q is NULL, typically crashes */
 printf("String copied\n");
 printf("%s\n", q);
}

When p is declared, its memory location contains garbage. In this run, the garbage value in p happens to correspond to a memory location that is not write-protected, so the strcpy is permitted.

By initialising q to NULL, we ensure that q cannot be used incorrectly without being noticed. Trying to copy the string “Goodbye” into location 0 (NULL) results in a run-time error (typically a segmentation fault or bus error) and a program crash.

So, how can we use memory properly?

Before we can use a pointer, it must be pointing to a valid area of memory. We can use malloc to request a pointer to a block of memory (or calloc to request an array of zero-initialised elements).


void *malloc(size_t byteSize)

void *calloc(size_t numElems, size_t byteSize)


//b.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
 char *q = NULL;

 printf("Requesting space for \"Goodbye\"\n");

 q = (char *)malloc(strlen("Goodbye")+1); /* +1 for the terminating '\0' */

 printf("About to copy \"Goodbye\" to q at address %p\n", (void *)q);
 strcpy(q, "Goodbye");
 printf("String copied\n");
 printf("%s\n", q);
}

How do we know if the memory allocation has been successful?


malloc (and calloc) return a non-NULL value if the request for space has been successful, and NULL if it fails. Using the result of malloc (or calloc) after it has failed to find memory WILL result in a run-time program crash.

//c.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
 char *q = NULL;

 printf("Requesting space for \"Goodbye\"\n");

 q = (char *)malloc(strlen("Goodbye")+1);

 if (!q) {
  perror("Failed to allocate space because");
  exit(1);
 }

 printf("About to copy \"Goodbye\" to q at address %p\n", (void *)q);
 strcpy(q, "Goodbye");
 printf("String copied\n");
 printf("%s\n", q);
}

The same applies to reading data from a file. If you use fscanf, etc., you must ensure that there is enough space to store the input, because the functions won’t check for you.
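For instance (a sketch using only standard library facilities; the file name bounded.c is illustrative), a field width on scanf, or fgets with the buffer size, keeps the input within the space you have allocated:

//bounded.c

#include <stdio.h>

int main(void) {
 char word[16];
 char line[64];

 /* the width 15 leaves room for the terminating '\0';
    without it, a long word would overflow the array */
 if (scanf("%15s", word) == 1)
  printf("word: %s\n", word);

 /* fgets never writes more than sizeof line bytes, including the '\0' */
 if (fgets(line, sizeof line, stdin) != NULL)
  printf("rest of line: %s", line);

 return 0;
}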

 

Freeing space


When space is allocated using the alloc family of functions, it remains allocated until the program terminates or it is explicitly freed.

Local variables are destroyed when their enclosing function terminates. Although their values are not necessarily overwritten immediately, C is free to reuse the space, for example for a subsequent function call or allocation.

//d.c

#include <stdio.h>
#include <string.h>

char *foo(char *);

int main(void) {
 char *a = NULL;
 char *b = NULL;
 a = foo("Hi there, Chris");
 b = foo("Goodbye");

 printf("From main: %s %s\n", a, b);
}

char *foo(char *p) {
  char q[strlen(p)+1];   /* q is a local (automatic) array on the stack */
  strcpy(q, p);
  printf("From q: the string is %s\n", q);
  return q;              /* BUG: returns the address of a local array */
}

In this example, q is a variable local to foo. A string created in main is passed to foo and copied to q. The address of q is returned to main, where there is an attempt to preserve and use the strings. The result is disastrous.


//e.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *foo(char *);

int main(void) {
 char *a = NULL;
 char *b = NULL;
 a = foo("Hi there, Chris");
 b = foo("Goodbye");

 printf("From main: %s %s\n", a, b);
}

char *foo(char *p) {
  char *q = (char *)malloc(strlen(p)+1); /* heap space survives the return */
  strcpy(q, p);
  printf("From foo: the string is %s\n", q);
  return q;
}

In this example, however, the space is requested legitimately, and, although q, a local variable holding the address of the string, is destroyed when foo terminates, the string itself is preserved on the heap and can be used safely in the calling function.

The correct way to release the space is to use free().


//f.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *foo(char *);

int main(void) {
 char *a = NULL;
 char *b = NULL;
 a = foo("Hi there, Chris");
 free(a);
 b = foo("Goodbye");
 free(b);
 /* deliberately reads freed memory to illustrate the point below;
    this is undefined behaviour and only for demonstration */
 printf("From main: %s %s\n", a, b);
}

char *foo(char *p) {
  char *q = (char *)malloc(strlen(p)+1);
  strcpy(q, p);
  printf("From foo: the string is %s\n", q);
  return q;
}

If free(b) is omitted, “Goodbye” can be seen to have been written over the location that previously held “Hi there, Chris”: the block released by free(a) is reused by the malloc inside the second call to foo.

The free function has the following syntax.

void free(void *ptr)
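After free() is called, the pointer variable still holds the old, now stale, address. A common defensive habit (a sketch, not from the original lectures; the file name freed.c is illustrative) is to set the pointer to NULL immediately, so any accidental later use fails in an obvious way rather than silently reusing freed memory:

//freed.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
 char *q = (char *)malloc(strlen("Goodbye")+1);
 if (!q) {
  perror("Failed to allocate space because");
  exit(1);
 }
 strcpy(q, "Goodbye");
 printf("%s\n", q);

 free(q);     /* release the block back to the allocator */
 q = NULL;    /* q no longer points at valid memory, so record that explicitly */
 return 0;
}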

What happens when the amount of space allocated turns out to be too small?


void *realloc(void *oldptr, size_t newsize)

You’ve been accepting characters from the keyboard into some previously allocated bytes, but the user keeps typing characters and is going to overflow the memory allocation… what do you do?

What you’d want to do, of course, is keep track of the number of characters being written, and when you’re almost out of space, request a larger block from C, copy the old string into the new location, and free the space associated with the old string – which is practically what the realloc function does.


//g.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *readline(char *, int *);
char *allocmem(char *, int);

int main(void) {
 char *p = NULL;
 int max = 10;
 p = (char *)malloc(max);
 if (!p) {
  perror("Memory allocation error 1");
  exit(1);
 }
 *p = '\0';                    /* start with an empty string */
 p = readline(p, &max);
 printf("User input\n%s\n", p);
 free(p);
}

char *readline(char *p, int *max) {
 int c;                        /* int, not char, so that EOF can be detected */
 char ch;
 int count = strlen(p);
 while ((c = getchar()) != EOF) {
  if (count == (*max-1)) {     /* buffer is full: grow it before appending */
   *(p+(*max-1)) = '\0';
   *max += 10;
   p = allocmem(p, *max);
   if (!p) {
    perror("Memory allocation error 2");
    exit(1);
   }
  }
  count += 1;
  ch = (char)c;
  strncat(p, &ch, 1);          /* append the single character just read */
 }
 return p;
}

char *allocmem(char *p, int max) {
  char *q = NULL;
  q = (char *)realloc(p, max); /* grow (and possibly move) the block */
  if (!q) {
   perror("Memory reallocation error");
   exit(1);
  }
  return q;
}

Memory Management and Data Structures


Reading keyboard text and keeping each line input in a linked list…

/*h.c. The program reads lines of input, and stores each line in a linked list. Eventually the list is printed */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct lineList {
 char *line;// a line of input
 struct lineList *nextLine; // pointer to the next line
};

// global variable pointing to head of the linked list
struct lineList *theHead = NULL;

char *readline(char *, int *, struct lineList *);
char *allocmem(char *, int);
struct lineList *makeElem(char *, struct lineList *);
void printList(struct lineList *);

int main(void) {
 char *p = NULL;
 struct lineList *head = NULL;
 int max = 10; // initial size of input array
 extern struct lineList *theHead;

 p = (char *)malloc(max); // request space for the input array
 if (!p) {
  perror("Memory allocation error 1");
  exit(1);
 }
 *p = '\0'; // we use strlen later, so initialise input array
 p = readline(p, &max, head); // read all the input data
 printList(theHead); // print all the input data
}

char *readline(char *p, int *max, struct lineList *elem) {
 … // some code has been removed

  if (c == '\n') { // if a newline is encountered in the input
   elem = makeElem(p, elem); // copy the input line (p) to an element
   free(p); // we’re going to resize the input array p, to save space
   *max = 10; // set max to 10 again (same as in main())
   p = (char *)malloc(*max); // request space for the input array
// NB: the lines from free to here could have been replaced by
// p = (char *)realloc(p, 10);
// check that the request was successful (code not shown)

   *p = '\0'; // initialise array so string functions will work
   count = 0; // reset count
   continue;
  }


struct lineList *makeElem(char *p, struct lineList *elem) {
// add an element to the linked list
//
 struct lineList *temp = elem;
 struct lineList *head = elem;
 extern struct lineList *theHead;

 if (!head) { // if the linked list hasn’t been created yet
//request space for it
  head = (struct lineList *)malloc(sizeof(struct lineList));
  if (!head) {
   perror("Couldn't allocate space for head");
   exit(3);
  }
  theHead = head; // set the global variable
// request space for the input line (+1 for the terminating '\0')
  head->line = (char *)malloc(strlen(p)+1);
  if (!head->line) {
   printf("Couldn't allocate space for %s because", p);
   perror("");
   exit(4);
  }
// copy the input line to the element
  strcpy(head->line, p);
  head->nextLine = NULL; // set the pointer to next element to NULL
  return head;
 }
// otherwise, if the linked list exists already
// look for the last element in the list
 while (elem) {
  temp = elem;
  elem = temp->nextLine;
 }

// create a new element, storing its address in the old last element
 temp->nextLine = (struct lineList *)malloc(sizeof(struct lineList));
 if (!temp->nextLine) {
  perror("Failed to allocate list head");
  exit(2);
 }
// request space to store the input line in the element (+1 for the terminating '\0')
 temp->nextLine->line = (char *)malloc(strlen(p)+1);
 if (!temp->nextLine->line) {
   printf("Couldn't allocate space for %s because", p);
   perror("");
   exit(4);
  }
//copy the input line to the new element
 strcpy(temp->nextLine->line, p);
 temp->nextLine->nextLine = NULL;
 return head;
}


// print the lines in the linked list, starting from the first element
void printList(struct lineList *head) {
 struct lineList *curr = head;

// loop while the address of the element is not NULL
// NULL indicates the end of the linked list
 while (curr) {
  printf("%s\n", curr->line);
  curr = curr->nextLine;
 }
}
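The listing above never releases the list. Here is a sketch of a matching clean-up routine (not part of the original program; the name freeList is illustrative) that frees each element and the line it owns:

// release every element of the linked list, and the line each one owns
void freeList(struct lineList *head) {
 while (head) {
  struct lineList *next = head->nextLine; /* remember the rest before freeing this element */
  free(head->line);                       /* free the stored line first */
  free(head);                             /* then the element itself */
  head = next;
 }
}

In main this would be called as freeList(theHead); once printList(theHead); has finished, after which theHead should be reset to NULL.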

Mutex vs Semaphore

What are the differences between Mutex vs Semaphore? When to use mutex and when to use semaphore?
A concrete understanding of operating system concepts is required to design and develop smart applications. Our objective here is to educate the reader on these concepts.
In operating system terminology, mutexes and semaphores are kernel resources that provide synchronization services (also called synchronization primitives). Why do we need two such primitives? Wouldn’t one be sufficient? To answer these questions, we need to understand a few keywords; please read the posts on atomicity and critical section. We will illustrate these concepts with examples rather than following the usual OS textual description.
The producer-consumer problem:
Note that this is a generalized explanation; practical details vary with the implementation.
Consider the standard producer-consumer problem. Assume we have a buffer of 4096 bytes. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that both threads should not operate on the buffer at the same time.
Using Mutex:
A mutex provides mutual exclusion: either the producer or the consumer can hold the key (the mutex) and proceed with its work. While the producer is filling the buffer, the consumer must wait, and vice versa.
At any point in time, only one thread can work with the entire buffer. The concept can be generalized using a semaphore.
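A minimal POSIX threads sketch of this arrangement (illustrative only: the file and function names are assumptions, and real code would add a condition variable or semaphore so that the two threads alternate rather than merely exclude each other):

//mutex_pc.c  (compile with: gcc mutex_pc.c -pthread)

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char buffer[4096];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
 (void)arg;
 pthread_mutex_lock(&lock);            /* take the key */
 strcpy(buffer, "data collected by the producer");
 printf("producer: filled the buffer\n");
 pthread_mutex_unlock(&lock);          /* hand the key back */
 return NULL;
}

static void *consumer(void *arg) {
 (void)arg;
 pthread_mutex_lock(&lock);            /* waits here while the producer holds the key */
 printf("consumer: processing \"%s\"\n", buffer);
 pthread_mutex_unlock(&lock);
 return NULL;
}

int main(void) {
 pthread_t p, c;
 pthread_create(&p, NULL, producer, NULL);
 pthread_create(&c, NULL, consumer, NULL);
 pthread_join(p, NULL);
 pthread_join(c, NULL);
 return 0;
}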
Using Semaphore:
A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers, so the consumer and producer can work on different buffers at the same time.
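A sketch with a POSIX counting semaphore initialised to 4 (illustrative only: the file name, the worker function and the one-slot-per-thread scheme are assumptions): up to four threads can work on the buffer pool at the same time, and a fifth would block in sem_wait until a slot is released.

//sem_pool.c  (compile with: gcc sem_pool.c -pthread; sem_init is available on Linux)

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SLOTS 4

static char buffers[SLOTS][1024];   /* the 4 KB buffer split into four 1 KB pieces */
static sem_t slots;                 /* counts how many buffers may still be claimed */

static void *worker(void *arg) {
 int i = *(int *)arg;
 sem_wait(&slots);                  /* blocks only if all four buffers are in use */
 snprintf(buffers[i], sizeof buffers[i], "worker %d used buffer %d", i, i);
 printf("%s\n", buffers[i]);
 sem_post(&slots);                  /* give the slot back */
 return NULL;
}

int main(void) {
 pthread_t t[SLOTS];
 int ids[SLOTS];

 sem_init(&slots, 0, SLOTS);        /* semaphore value starts at 4 */
 for (int i = 0; i < SLOTS; i++) {
  ids[i] = i;
  pthread_create(&t[i], NULL, worker, &ids[i]);
 }
 for (int i = 0; i < SLOTS; i++)
  pthread_join(t[i], NULL);
 sem_destroy(&slots);
 return 0;
}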
Misconception:
There is a common confusion between a binary semaphore and a mutex. You might have come across the claim that a mutex is a binary semaphore, but it is not: the purposes of a mutex and a semaphore are different. Perhaps, because of the similarity in their implementations, a mutex is sometimes referred to as a binary semaphore.
Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex, and only the owner can release the lock (the mutex).
A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). For example, if you are listening to songs on your mobile (treat that as one task) and your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
General Questions:
1. Can a thread acquire more than one lock (Mutex)?
Yes. A thread may need more than one resource, and hence more than one lock. If any of the locks is not available, the thread will wait (block) on that lock.
2. Can a mutex be locked more than once?
A mutex is a lock; only one state (locked/unlocked) is associated with it. However, a recursive mutex can be locked more than once by the same thread (on POSIX compliant systems). In that case a count is associated with it, yet it still has only one state (locked/unlocked), and the programmer must unlock the mutex as many times as it was locked.
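A sketch of that behaviour on a POSIX compliant system using PTHREAD_MUTEX_RECURSIVE (the file and helper function names are illustrative):

//recursive_mutex.c  (compile with: gcc recursive_mutex.c -pthread)

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m;

static void inner(void) {
 pthread_mutex_lock(&m);       /* second lock by the same thread: lock count becomes 2 */
 printf("inner: still holding the mutex\n");
 pthread_mutex_unlock(&m);     /* count back to 1, mutex still locked */
}

static void outer(void) {
 pthread_mutex_lock(&m);       /* first lock: count 1 */
 inner();
 pthread_mutex_unlock(&m);     /* count 0, mutex finally released */
}

int main(void) {
 pthread_mutexattr_t attr;
 pthread_mutexattr_init(&attr);
 pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
 pthread_mutex_init(&m, &attr);

 outer();

 pthread_mutex_destroy(&m);
 pthread_mutexattr_destroy(&attr);
 return 0;
}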
3. What happens if a non-recursive mutex is locked more than once?
Deadlock. If a thread that has already locked a mutex tries to lock it again, it enters the waiting list of that mutex, which results in a deadlock, because no other thread can unlock the mutex. An operating system implementer can take care to identify the owner of the mutex and return an error if it is already locked by the same thread, to prevent such deadlocks.
4. Are binary semaphore and mutex same?
No. We suggest treating them separately, as explained above: signalling versus locking mechanisms. A binary semaphore may, however, suffer the same critical issues (e.g. priority inversion) associated with a mutex. We will cover these in a later article.
A programmer may prefer a mutex rather than creating a semaphore with count 1.
5. What is a mutex and critical section?
Some operating systems use the term critical section in their API. A mutex is usually a costly operation due to the protection protocols associated with it. Ultimately, the objective of a mutex is atomic access. There are other ways to achieve atomic access, such as disabling interrupts, which can be much faster but ruins responsiveness; the alternative critical-section API may make use of disabling interrupts.
6. What are events?
Mutexes, semaphores, events, critical sections, etc. all serve the same purpose: they are synchronization primitives. They differ in the cost of using them. Consult the OS documentation for exact details.
7. Can we acquire mutex/semaphore in an Interrupt Service Routine?
An ISR runs asynchronously in the context of the currently running thread. It is not recommended to wait on (make a blocking call for) a synchronization primitive inside an ISR. ISRs are meant to be short, and a blocking mutex/semaphore call could block the currently running thread. However, an ISR can signal a semaphore or unlock a mutex.
8. What we mean by “thread blocking on mutex/semaphore” when they are not available?
Every synchronization primitive has a waiting list associated with it. When the resource is not available, the requesting thread is moved from the processor’s run queue to the waiting list of the synchronization primitive. When the resource becomes available, the highest-priority thread on the waiting list gets the resource (more precisely, it depends on the scheduling policy).
9. Is it necessary that a thread must block always when resource is not available?
Not necessarily. If the design specifies what is to be done when the resource is not available, the thread can take up that work instead (a different code branch). To support such application requirements, the OS provides non-blocking APIs.
For example, the POSIX pthread_mutex_trylock() API: when the mutex is not available the function returns immediately, whereas pthread_mutex_lock() blocks the thread until the resource is available.
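A short sketch of the difference (the file and function names are illustrative): pthread_mutex_trylock() returns 0 if the lock was acquired and a non-zero error code (EBUSY) if it is already held, so the thread can take a different branch instead of blocking as pthread_mutex_lock() would.

//trylock.c  (compile with: gcc trylock.c -pthread)

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void do_or_defer(void) {
 if (pthread_mutex_trylock(&m) == 0) {
  /* got the resource: do the protected work */
  printf("got the lock, doing the work now\n");
  pthread_mutex_unlock(&m);
 } else {
  /* resource busy: take up some other work instead of blocking */
  printf("lock busy, doing something else for now\n");
 }
}

int main(void) {
 do_or_defer();
 return 0;
}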