
Embedded Web Server for the CR16
National Semiconductor
Jeff Wright
[Figure 4. Protocol Layer Model. Labels visible in the figure: Data Link, Name Server, HTTP 1.0 (Web Server).]
Figure 5 illustrates the sequencing of the various layers during a typical HTTP request.  Task
priorities are indicated by the circled number in the upper-left corner of each task box.  You may
infer (correctly) from the figure that these priorities roughly follow the logical progression of a
typical segment as it proceeds up and down the protocol stack.  Since the OS does not support a
“round-robin” (time-sharing) scheduling policy, each layer is assigned a unique priority.  Priorities
are assigned to maximize the efficiency of the sequencing process (i.e., minimize response time) by
vectoring received packets directly to the relevant layers.  Furthermore, since “making the common
case fast” is good engineering practice, emphasis is placed on application-layer (in this case,
HTTP) request handling.
Rather than keeping the layers suspended until they are needed, all protocol layers run continually. 
(This may change to further improve efficiency.)  Upon receiving control of the CPU from the
OS, each layer examines certain flags in its API, along with its state variable and input/event
flags, to determine what service, if any, it needs to perform.  If these indicators show that no
service is required, the task delays itself for one OS tick, yielding the CPU to the next layer. 
Although layer priorities could be dynamically modified to further improve sequencing efficiency,
doing so would increase the OS kernel size as well as packet-processing latency.  As a compromise,
certain layers call the OS function OSTimeDlyResume, thereby allowing a previously delayed, but
now required, layer to run immediately.  Other sequences are possible, depending upon the nature of the
request.