I/O Multiplexing: Top Questions and Answers

You are looking for information and articles about the topic io multiplexing. Here is the best content on the subject, compiled by the https://chewathai27.com/to team, along with related topics such as: I/O multiplexing with epoll, I/O multiplexing in C, Redis I/O multiplexing, the advantage of using the select() system call in the I/O multiplexing model, what multiplexing is, the signal-driven I/O model, file I/O, and the scenarios in which I/O multiplexing is used in networking applications.

What is I/O multiplexing?

I/O multiplexing means what it says – allowing the programmer to examine and block on multiple I/O streams (or other “synchronizing” events), being notified whenever any one of the streams is active so that it can process data on that stream.

Why is I/O multiplexing required?

I/O multiplexing is typically used in networking applications in the following scenarios: when a client is handling multiple descriptors (normally interactive input and a network socket), or when a client handles multiple sockets at the same time (this is possible, but rare).

Why is I/O multiplexing required? Explain the select() and poll() functions for implementing I/O multiplexing.

What we need is the capability to tell the kernel that we want to be notified if one or more I/O conditions are ready (i.e., input is ready to be read, or a descriptor is capable of taking more output). This capability is called I/O multiplexing and is provided by the select and poll functions.

What is multiplexing with select?

I/O multiplexing—select()

Like asynchronous I/O, the select() API creates a common point to wait for multiple conditions at the same time. However, select() allows an application to specify sets of descriptors to see if the following conditions exist: There is data to be read. Data can be written.

What is I/O multiplexing in Unix?

I/O multiplexing is the capability to tell the kernel that we want to be notified if one or more I/O conditions are ready, like input being ready to be read, or a descriptor being capable of taking more output. It is used, for example, when a client is handling multiple descriptors (like standard input and a network socket).

What is the difference between synchronous and asynchronous I/O models?

Synchronous I/O versus Asynchronous I/O

POSIX defines these two terms as follows: A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes. An asynchronous I/O operation does not cause the requesting process to be blocked.

What is signal-driven I/O?

Historically, this has been called asynchronous I/O, but the signal-driven I/O that we will describe is not true asynchronous I/O. The latter is normally defined as the process performing the I/O operation (say a read or write), with the kernel returning immediately after the kernel initiates the I/O operation.

What is the blocking I/O model?

With blocking I/O, when a client makes a request to connect with the server, the thread that handles that connection is blocked until there is some data to read, or the data is fully written. Until the relevant operation is complete, that thread can do nothing else but wait.

What is a TCP echo client/server?

A TCP/UDP echo server using I/O multiplexing: a TCP-based client/server system consisting of a server which responds to multiple clients and allows them to issue “ls” and “more” commands to view directory information and view a file on the server machine.

Why is poll faster than select?

Unlike select, a caller no longer needs to rebuild the descriptor sets on every call: poll keeps the requested events in a separate events field and reports readiness through revents, which the kernel resets on each call. The complexity of the inner loop is O(n), where n is the number of fds actually monitored. If the fds are {1, 10, 1023}, poll scans only those 3 entries, whereas select must scan every descriptor number up to the highest one being watched.
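
To make the contrast concrete, here is a minimal illustrative sketch (ours, not from the quoted answer) that watches standard input with poll(); note that the fds array is filled in once and reused across calls, since poll() reports results in revents:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal sketch: watch stdin with poll(). Unlike select(), the fds[]
   array does not need to be rebuilt each iteration, because poll()
   reports results in revents and leaves events untouched. */
int main(void)
{
    struct pollfd fds[1];
    fds[0].fd = STDIN_FILENO;
    fds[0].events = POLLIN;           /* interested in readability */

    for (;;) {
        int n = poll(fds, 1, 5000);   /* wait up to 5000 ms */
        if (n == -1) {
            perror("poll");
            return 1;
        }
        if (n == 0) {
            printf("timeout\n");
            continue;
        }
        if (fds[0].revents & POLLIN) {
            char buf[256];
            ssize_t len = read(STDIN_FILENO, buf, sizeof(buf));
            if (len <= 0)
                return 0;             /* EOF or error */
            printf("read %zd bytes\n", len);
        }
    }
}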

What is the difference between polling and selecting?

The select() call has you create three bitmasks to mark which sockets and file descriptors you want to watch for reading, writing, and errors, and then the operating system marks which ones in fact have had some kind of activity; poll() has you create a list of descriptor IDs, and the operating system marks each one that has had activity by setting flags in its revents field.

What are socket options?

In addition to binding a socket to a local address or connecting it to a destination address, application programs need a method to control the socket. For example, when using protocols that use time out and retransmission, the application program may want to obtain or set the time-out parameters.
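
As an illustration (a sketch of ours, not part of the quoted answer), the following helper uses setsockopt() to set a 5-second receive timeout, one of the time-out parameters mentioned above:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Set a 5-second receive timeout on sockfd; after this, blocking reads
   on the socket fail with EAGAIN/EWOULDBLOCK if no data arrives in time. */
int set_recv_timeout(int sockfd)
{
    struct timeval tv;
    tv.tv_sec = 5;
    tv.tv_usec = 0;
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1) {
        perror("setsockopt");
        return -1;
    }
    return 0;
}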

What are the types of multiplexing?

  • Frequency-division multiplexing (FDM)
  • Wavelength-division multiplexing (WDM)
  • Time-division multiplexing (TDM)
  • Code-division multiplexing (CDM)
  • Space-division multiplexing (SDM)
  • Polarization-division multiplexing (PDM)

What are multiplexers and demultiplexers?

A multiplexer is a combinational circuit that accepts multiple data inputs but provides only a single output. A demultiplexer is a combinational circuit that accepts just a single input but directs it through multiple outputs.

Where are multiplexers used?

Multiplexers are used in various applications where multiple data streams need to be transmitted over a single line.
  • Communication systems
  • Computer memory
  • Telephone networks
  • Transmission from the computer system of a satellite
  • Arithmetic logic unit
  • Serial-to-parallel converter

What are the I/O models?

In Unix network programming, an I/O model describes how an application coordinates with the kernel to perform input and output. There are five I/O models available under Unix: blocking I/O, nonblocking I/O, I/O multiplexing (select and poll), signal-driven I/O (SIGIO), and asynchronous I/O (the POSIX aio_ functions).

What is epoll in Linux?

epoll is a Linux kernel system call for a scalable I/O event notification mechanism, first introduced in version 2.5.44 of the Linux kernel. Its function is to monitor multiple file descriptors to see whether I/O is possible on any of them.
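
A minimal sketch (ours, for illustration) of the epoll workflow — create an instance, register a descriptor, then wait for events:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

/* Minimal sketch: monitor a descriptor (here stdin) with epoll.
   The descriptor is registered once; epoll_wait() then returns
   only the descriptors that are ready, however many are registered. */
int main(void)
{
    int epfd = epoll_create1(0);
    if (epfd == -1) {
        perror("epoll_create1");
        return 1;
    }

    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN;          /* readable */
    ev.data.fd = STDIN_FILENO;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) == -1) {
        perror("epoll_ctl");
        return 1;
    }

    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, 5000);   /* wait up to 5 s */
    if (n > 0 && events[0].data.fd == STDIN_FILENO)
        printf("stdin is ready to read\n");
    else if (n == 0)
        printf("timeout\n");

    close(epfd);
    return 0;
}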

How does a TCP client/server work?

The TCP/IP protocol allows systems to communicate even if they use different types of network hardware. For example, TCP, through an Internet connection, transmits messages between a system using Ethernet and another system using Token Ring. TCP controls the accuracy of data transmission.

What is iterative server?

An iterative server handles both the connection request and the transaction involved in the call itself. Iterative servers are fairly simple and are suitable for transactions that do not last long. However, if the transaction takes more time, queues can build up quickly.


What’s the difference between processes, threads, and I/O multiplexing?


Sources compiled for this article:

  • Chi tiết bài học I/O Multiplexing – select() (vimentor.com)
  • io multiplexing (wiki.c2.com)
  • Chapter 6. I/O Multiplexing: The select and poll Functions – Shichao’s Notes (notes.shichao.io)
  • I/O Multiplexing: the Select and Poll Functions (www.brainkart.com)
  • I/O multiplexing—select() (www.ibm.com)
  • io multiplexing (www.cs.toronto.edu)
  • Network Programming: I/O Multiplexing (www.thedailyprogrammer.com)
  • IO multiplexing (programmer.ink)

The two most detailed sources, from vimentor.com and notes.shichao.io, are reproduced below.

Lesson Detail: I/O Multiplexing

I/O Multiplexing – select()

The universal file I/O model covered in earlier lessons operates on a single file descriptor at a time. In principle, each file I/O system call blocks when it is called until the data is transferred. For example, when we want to read from a pipe with the read() system call, read() may block the program if there is no data in the pipe at that moment; and write() will block the program if the pipe does not have enough buffer space to store the data being written.

If our application only needs to work with one file descriptor, and the time spent blocked on reads and writes is small, the basic read() and write() file I/O system calls are all we need. In practice, however, a program may have to monitor many file descriptors (for example, a server must serve descriptors for many client sockets). If read() is waiting for data on a descriptor for one client that has not sent anything, the program stays blocked while other clients have already sent data into their descriptors and are waiting to be read.

From a programming standpoint, we can propose two ways to solve this problem:

Nonblocking I/O: A file descriptor can be put into nonblocking mode by setting the O_NONBLOCK flag of open() when opening the file. A file I/O system call on such a descriptor returns immediately if the file is not ready, placing an error code in errno. We then know the descriptor is not ready for reading or writing, can do other work, and come back to poll the descriptor later. This approach has its own drawback: if we poll the file too infrequently, the latency before the program services it grows long; if we poll too frequently, we waste the system’s CPU resources.

Create a new thread to watch the descriptor: The parent process creates a thread whose only job is to watch the file descriptor of interest; the thread blocks until that descriptor is ready. This method is plain and easy to implement, but if we work with many descriptors we must create as many threads as there are descriptors, which consumes resources and makes the program complicated.

Because of the limitations of these two approaches, we need another solution for checking the readiness of many file descriptors at once. One of the simplest and most widely used techniques on embedded Linux systems is I/O multiplexing.

The operating principle of I/O multiplexing is: the program monitors several file descriptors at the same time and blocks until one of the monitored descriptors becomes ready or the configured timeout expires. In this lesson, we analyze and use the two commonly used, roughly equivalent I/O multiplexing system calls: select() and poll().

The select() solution

The select() system call has the following prototype:

#include <sys/time.h>       /* For portability */
#include <sys/select.h>

int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);

select() works by gathering the file descriptors you want to monitor into sets (fd_set) and blocking the program until one or more descriptors in the sets become ready. select() returns an integer: the number of ready file descriptors, 0 if the timeout expired, or -1 if an error occurred.

In the prototype above, the arguments nfds, readfds, writefds, and exceptfds identify the file descriptors to monitor; timeout is the limit on how long select() may block the program.

File descriptor sets

The descriptors to monitor are divided into three classes and passed to select() in the sets readfds, writefds, and exceptfds:

readfds: the set of file descriptors checked for read readiness

writefds: the set of file descriptors checked for write readiness

exceptfds: the set of file descriptors checked for exceptional conditions.

The first argument of select(), nfds, is computed as the highest-numbered file descriptor to monitor across all three sets, plus 1. The nfds value exists to improve select()’s performance, since the kernel then does not need to check descriptors above nfds.

The term exceptional condition is often misunderstood as meaning an error occurred on the descriptor. In fact, an exception here is usually a state change on a pseudoterminal or the arrival of out-of-band data on a socket. Within the scope of this lesson on I/O multiplexing we will not dig deeply into these two terms, because in practice exceptfds is rarely used.

For example, if you want to check whether one or more file descriptors are ready for reading without blocking the program, just pass those descriptors in the readfds set. If select() returns 0, the timeout expired with no file ready for reading. If select() returns a value greater than 0, some files are ready to read at that moment, and those files are left set in the readfds set.

The sets readfds, writefds, and exceptfds have the data type fd_set, a Linux type implemented as a bit mask. In programs, however, we do not modify these descriptor sets directly; instead we use the following macros:

#include <sys/select.h>

void FD_ZERO(fd_set *fdset);
void FD_SET(int fd, fd_set *fdset);
void FD_CLR(int fd, fd_set *fdset);
int  FD_ISSET(int fd, fd_set *fdset);

FD_ZERO: initializes the file descriptor set pointed to by fdset to the empty state. Before passing the files to watch into readfds, writefds, or exceptfds, we must use this macro to initialize the set.

FD_SET: adds file descriptor fd to the set pointed to by fdset.

FD_CLR: removes file descriptor fd from the set pointed to by fdset.

FD_ISSET: checks whether file descriptor fd is a member of the set pointed to by fdset.

timeout

The timeout argument is the maximum time select() will block the program if no monitored descriptor becomes ready. If timeout is set to NULL, select() blocks indefinitely (unless a signal occurs).

timeout is a structure of the following type:

struct timeval {
    long tv_sec;     /* Seconds */
    long tv_usec;    /* Microseconds */
};

If both fields of timeout are set to 0, select() does not block the program; it checks which descriptors in the sets are ready and returns at once. A timeout of 0 is used when we want to poll, at that exact instant, whether the descriptors are ready.

Example

Now that we understand how select() works and know its prototype, let’s examine the following example that uses select().

In this example, you will write a program that uses select() to monitor the stdin descriptor. The select() system call blocks until stdin has data to read (that is, until you type on the keyboard). The timeout is 5 seconds; if you do not type anything within 5 seconds, select() returns 0 and the program prints a message. If there is keyboard input, the program prints the data you entered. This program watches only one descriptor, so it does not show the full power of select(); its purpose is simply to demonstrate, with a simple program, how select() is used.

The source code of the program is as follows:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/select.h>

#define TIMEOUT 5      /* select() timeout: 5 s */
#define BUF_LEN 1024   /* buffer length */

int main(void)
{
    struct timeval tv;
    fd_set readfds;
    int ret = -1;

    /* Initialize the readfds set and add the stdin descriptor to it */
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    /* Set up the timeout */
    tv.tv_sec = TIMEOUT;
    tv.tv_usec = 0;

    /* Block until stdin is ready for reading.
       The writefds and exceptfds sets are passed as NULL. */
    ret = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (-1 == ret) {
        perror("Select error.\n");
        return 1;
    } else if (0 == ret) {
        printf("Timeout after %d seconds.\n", TIMEOUT);
        return 0;
    }

    /* Check whether stdin is in readfds.
       If FD_ISSET returns 1, stdin is in readfds and is ready to read. */
    if (FD_ISSET(STDIN_FILENO, &readfds)) {
        char buf[BUF_LEN + 1];
        int len = -1;

        /* Read the data from the stdin descriptor */
        len = read(STDIN_FILENO, buf, BUF_LEN);
        if (-1 == len) {
            perror("Read fd error.\n");
            return 1;
        }
        if (len) {
            buf[len] = '\0';  /* manual, since read() does not null-terminate the string */
            printf("read: %s\n", buf);
        }
        return 0;
    }
    return 1;
}

Now compile and run the program.

Run the program and type nothing; it exits after the 5-second timeout.

Now run the program again and type any text on the keyboard; it prints what you typed.

Conclusion

I/O multiplexing in general, and the select() system call in particular, are used very heavily in Linux programming, so understanding and knowing how to use I/O multiplexing is a requirement for every Linux engineer. In the next lesson, we will study another I/O multiplexing technique: poll().

Chapter 6. I/O Multiplexing: The select and poll Functions

When the TCP client is handling two inputs at the same time: standard input and a TCP socket, we encountered a problem when the client was blocked in a call to fgets (on standard input) and the server process was killed. The server TCP correctly sent a FIN to the client TCP, but since the client process was blocked reading from standard input, it never saw the EOF until it read from the socket (possibly much later).

We want to be notified if one or more I/O conditions are ready (i.e., input is ready to be read, or the descriptor is capable of taking more output). This capability is called I/O multiplexing and is provided by the select and poll functions, as well as a newer POSIX variation of the former, called pselect .

I/O multiplexing is typically used in networking applications in the following scenarios:

When a client is handling multiple descriptors (normally interactive input and a network socket)

When a client handles multiple sockets at the same time (this is possible, but rare)

If a TCP server handles both a listening socket and its connected sockets

If a server handles both TCP and UDP

If a server handles multiple services and perhaps multiple protocols

I/O multiplexing is not limited to network programming. Many nontrivial applications find a need for these techniques.

We first examine the basic differences in the five I/O models that are available to us under Unix:

blocking I/O

nonblocking I/O

I/O multiplexing (select and poll)

signal-driven I/O (SIGIO)

asynchronous I/O (the POSIX aio_ functions)

There are normally two distinct phases for an input operation:

1. Waiting for the data to be ready. This involves waiting for data to arrive on the network. When the packet arrives, it is copied into a buffer within the kernel.

2. Copying the data from the kernel to the process. This means copying the (ready) data from the kernel’s buffer into our application buffer.

Blocking I/O Model

The most prevalent model for I/O is the blocking I/O model (which we have used for all our examples in the previous sections). By default, all sockets are blocking. The scenario is shown in the figure below:

We use UDP for this example instead of TCP because with UDP, the concept of data being “ready” to read is simple: either an entire datagram has been received or it has not. With TCP it gets more complicated, as additional variables such as the socket’s low-water mark come into play.

We also refer to recvfrom as a system call to differentiate between our application and the kernel, regardless of how recvfrom is implemented (system call on BSD and function that invokes getmsg system call on System V). There is normally a switch from running in the application to running in the kernel, followed at some time later by a return to the application.

In the figure above, the process calls recvfrom and the system call does not return until the datagram arrives and is copied into our application buffer, or an error occurs. The most common error is the system call being interrupted by a signal, as we described in Section 5.9. We say that the process is blocked the entire time from when it calls recvfrom until it returns. When recvfrom returns successfully, our application processes the datagram.

Nonblocking I/O Model

When a socket is set to be nonblocking, we are telling the kernel “when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to sleep, but return an error instead”. The figure is below:

  • For the first three calls to recvfrom, there is no data to return and the kernel immediately returns an error of EWOULDBLOCK.

  • The fourth time we call recvfrom, a datagram is ready; it is copied into our application buffer, and recvfrom returns successfully. We then process the data.

When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. The application is continually polling the kernel to see if some operation is ready. This is often a waste of CPU time, but this model is occasionally encountered, normally on systems dedicated to one function.
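
As a sketch (ours, not from the text), polling a nonblocking UDP socket might look like this: the descriptor is put into nonblocking mode with fcntl(), and the loop spins until recvfrom() stops failing with EWOULDBLOCK:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Put sockfd into nonblocking mode, then poll it in a busy loop.
   Each recvfrom() returns immediately: either with a datagram,
   or with -1 and errno set to EWOULDBLOCK/EAGAIN. */
void poll_datagram(int sockfd)
{
    char buf[2048];

    int flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);

    for (;;) {
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {
            printf("got datagram of %zd bytes\n", n);
            break;
        }
        if (errno != EWOULDBLOCK && errno != EAGAIN) {
            perror("recvfrom");
            break;
        }
        /* no data yet: the application could do other work here */
    }
}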

I/O Multiplexing Model

With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call. The figure is a summary of the I/O multiplexing model:

We block in a call to select , waiting for the datagram socket to be readable. When select returns that the socket is readable, we then call recvfrom to copy the datagram into our application buffer.

Comparing to the blocking I/O model

Comparing Figure 6.3 to Figure 6.1:

  • Disadvantage: using select requires two system calls (select and recvfrom) instead of one.

  • Advantage: we can wait for more than one descriptor to be ready (see the select function later in this chapter).

Multithreading with blocking I/O

Another closely related I/O model is to use multithreading with blocking I/O. That model very closely resembles the model described above, except that instead of using select to block on multiple file descriptors, the program uses multiple threads (one per file descriptor), and each thread is then free to call blocking system calls like recvfrom .
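
A minimal sketch of this model (an illustration under our own naming, not code from the source): one thread per descriptor, each blocking in its own recvfrom:

#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* One thread per descriptor: each thread blocks in recvfrom()
   on its own socket, so no select()/poll() is needed. */
static void *reader_thread(void *arg)
{
    int sockfd = *(int *)arg;
    char buf[2048];

    for (;;) {
        ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0, NULL, NULL);
        if (n <= 0)
            break;                        /* error or EOF: stop this reader */
        printf("fd %d: got %zd bytes\n", sockfd, n);
    }
    return NULL;
}

/* Spawn one blocking reader thread per descriptor in fds[];
   fds must outlive the threads, since each receives a pointer into it. */
void start_readers(int *fds, int nfds)
{
    for (int i = 0; i < nfds; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, reader_thread, &fds[i]);
        pthread_detach(tid);
    }
}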

Signal-Driven I/O Model

The signal-driven I/O model uses signals, telling the kernel to notify us with the SIGIO signal when the descriptor is ready. The figure is below:

  • We first enable the socket for signal-driven I/O (Section 25.2) and install a signal handler using the sigaction system call. The return from this system call is immediate and our process continues; it is not blocked.

  • When the datagram is ready to be read, the SIGIO signal is generated for our process. We can either: read the datagram from the signal handler by calling recvfrom and then notify the main loop that the data is ready to be processed (Section 25.3), or notify the main loop and let it read the datagram.

The advantage to this model is that we are not blocked while waiting for the datagram to arrive. The main loop can continue executing and just wait to be notified by the signal handler that either the data is ready to process or the datagram is ready to be read.

Asynchronous I/O Model

Asynchronous I/O is defined by the POSIX specification, and various differences in the real-time functions that appeared in the various standards which came together to form the current POSIX specification have been reconciled.

These functions work by telling the kernel to start the operation and to notify us when the entire operation (including the copy of the data from the kernel to our buffer) is complete. The main difference between this model and the signal-driven I/O model is that with signal-driven I/O, the kernel tells us when an I/O operation can be initiated, but with asynchronous I/O, the kernel tells us when an I/O operation is complete. See the figure below for example:

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_ ) and pass the kernel the following: descriptor, buffer pointer, buffer size (the same three arguments for read ), file offset (similar to lseek ), and how to notify us when the entire operation is complete. This system call returns immediately and our process is not blocked while waiting for the I/O to complete.

We assume in this example that we ask the kernel to generate some signal when the operation is complete. This signal is not generated until the data has been copied into our application buffer, which is different from the signal-driven I/O model.
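
To show the shape of the POSIX AIO interface, here is a hedged sketch (ours, not the book's example); for simplicity it polls aio_error() for completion instead of requesting the signal notification described above (on Linux, link with -lrt):

#include <aio.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Start an asynchronous read on fd and wait for it to complete.
   This sketch polls aio_error() rather than arranging a signal or
   thread notification in aio_sigevent. */
int async_read(int fd, char *buf, size_t len)
{
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = len;
    cb.aio_offset = 0;

    if (aio_read(&cb) == -1) {          /* returns immediately */
        perror("aio_read");
        return -1;
    }

    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);                   /* the process is free to do other work */

    return (int)aio_return(&cb);        /* bytes read, or -1 on error */
}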

Comparison of the I/O Models

The figure below is a comparison of the five different I/O models.

The main difference between the first four models is the first phase, as the second phase in the first four models is the same: the process is blocked in a call to recvfrom while the data is copied from the kernel to the caller’s buffer. Asynchronous I/O, however, handles both phases and is different from the first four.

Synchronous I/O versus Asynchronous I/O

POSIX defines these two terms as follows:

A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.

An asynchronous I/O operation does not cause the requesting process to be blocked.

Using these definitions, the first four I/O models (blocking, nonblocking, I/O multiplexing, and signal-driven I/O) are all synchronous because the actual I/O operation ( recvfrom ) blocks the process. Only the asynchronous I/O model matches the asynchronous I/O definition.

select Function

The select function allows the process to instruct the kernel to either:

Wait for any one of multiple events to occur and to wake up the process only when one or more of these events occurs, or

Wake up the process when a specified amount of time has passed.

This means that we tell the kernel what descriptors we are interested in (for reading, writing, or an exception condition) and how long to wait. The descriptors in which we are interested are not restricted to sockets; any descriptor can be tested using select .

#include <sys/select.h>
#include <sys/time.h>

int select(int maxfdp1, fd_set *readset, fd_set *writeset,
           fd_set *exceptset, const struct timeval *timeout);

/* Returns: positive count of ready descriptors, 0 on timeout, –1 on error */

The timeout argument

The timeout argument tells the kernel how long to wait for one of the specified descriptors to become ready. A timeval structure specifies the number of seconds and microseconds.

struct timeval {
    long tv_sec;     /* seconds */
    long tv_usec;    /* microseconds */
};

There are three possibilities for the timeout (a short code sketch follows this list):

1. Wait forever (timeout is specified as a null pointer). Return only when one of the specified descriptors is ready for I/O.

2. Wait up to a fixed amount of time (timeout points to a timeval structure). Return when one of the specified descriptors is ready for I/O, but do not wait beyond the number of seconds and microseconds specified in the timeval structure.

3. Do not wait at all (timeout points to a timeval structure and the timer value is 0, i.e., the number of seconds and microseconds specified by the structure are 0). Return immediately after checking the descriptors. This is called polling.
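
The following is an illustrative sketch (ours, not from the source) of the three timeout choices, wrapped in a hypothetical helper wait_readable():

#include <stddef.h>
#include <sys/select.h>

/* Illustrates the three timeout choices for select(), testing a single
   descriptor fd for readability. mode 0: block forever; mode 1: wait up
   to 2.5 seconds; otherwise: poll and return immediately. */
int wait_readable(int fd, int mode)
{
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(fd, &rset);

    if (mode == 0)                          /* 1. wait forever */
        return select(fd + 1, &rset, NULL, NULL, NULL);

    if (mode == 1) {                        /* 2. wait up to 2.5 seconds */
        struct timeval tv = { 2, 500000 };
        return select(fd + 1, &rset, NULL, NULL, &tv);
    }

    struct timeval zero = { 0, 0 };         /* 3. poll: do not wait at all */
    return select(fd + 1, &rset, NULL, NULL, &zero);
}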

Note:

  • The wait in the first two scenarios is normally interrupted if the process catches a signal and returns from the signal handler. For portability, we must be prepared for select to return an error of EINTR if we are catching signals. Berkeley-derived kernels never automatically restart select.

  • Although the timeval structure has a microsecond field tv_usec, the actual resolution supported by the kernel is often more coarse. Many Unix kernels round the timeout value up to a multiple of 10 ms. There is also a scheduling latency involved, meaning it takes some time after the timer expires before the kernel schedules this process to run.

  • On some systems, the timeval structure can represent values that are not supported by select; it will fail with EINVAL if the tv_sec field in the timeout is over 100 million seconds.

  • The const qualifier on the timeout argument means it is not modified by select on return.

The descriptor sets arguments

The three middle arguments, readset, writeset, and exceptset, specify the descriptors that we want the kernel to test for reading, writing, and exception conditions. There are only two exception conditions currently supported:

The arrival of out-of-band data for a socket.

The presence of control status information to be read from the master side of a pseudo-terminal that has been put into packet mode. (Not covered in UNP)

select uses descriptor sets, typically an array of integers, with each bit in each integer corresponding to a descriptor. For example, using 32-bit integers, the first element of the array corresponds to descriptors 0 through 31, the second element of the array corresponds to descriptors 32 through 63, and so on. All the implementation details are irrelevant to the application and are hidden in the fd_set datatype and the following four macros:

void FD_ZERO(fd_set *fdset);           /* clear all bits in fdset */
void FD_SET(int fd, fd_set *fdset);    /* turn on the bit for fd in fdset */
void FD_CLR(int fd, fd_set *fdset);    /* turn off the bit for fd in fdset */
int  FD_ISSET(int fd, fd_set *fdset);  /* is the bit for fd on in fdset? */

We allocate a descriptor set of the fd_set datatype, we set and test the bits in the set using these macros, and we can also assign it to another descriptor set across an equals sign (=) in C.

An array of integers using one bit per descriptor is just one possible way to implement select. Nevertheless, it is common to refer to the individual descriptors within a descriptor set as bits, as in “turn on the bit for the listening descriptor in the read set.”

The following example defines a variable of type fd_set and then turns on the bits for descriptors 1, 4, and 5:

fd_set rset;

FD_ZERO(&rset);      /* initialize the set: all bits off */
FD_SET(1, &rset);    /* turn on bit for fd 1 */
FD_SET(4, &rset);    /* turn on bit for fd 4 */
FD_SET(5, &rset);    /* turn on bit for fd 5 */

It is important to initialize the set, since unpredictable results can occur if the set is allocated as an automatic variable and not initialized.

Any of the middle three arguments to select , readset, writeset, or exceptset, can be specified as a null pointer if we are not interested in that condition. Indeed, if all three pointers are null, then we have a higher precision timer than the normal Unix sleep function. The poll function provides similar functionality.
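
For instance, here is a small illustrative sketch (ours, not the book's) that uses select with three null descriptor-set pointers as a portable sub-second sleep:

#include <stddef.h>
#include <sys/select.h>

/* Sleep for the given number of milliseconds using select() with
   all three descriptor sets null: nothing is tested, so the call
   simply waits for the timeout to expire. */
void msleep(long ms)
{
    struct timeval tv;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);
}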

The maxfdp1 argument

The maxfdp1 argument specifies the number of descriptors to be tested. Its value is the maximum descriptor to be tested plus one. The descriptors 0, 1, 2, up through and including maxfdp1–1 are tested.

The constant FD_SETSIZE, defined by including <sys/select.h>, is the number of descriptors in the fd_set datatype. Its value is often 1024, but few programs use that many descriptors.

The reason the maxfdp1 argument exists, along with the burden of calculating its value, is for efficiency. Although each fd_set has room for many descriptors, typically 1,024, this is much more than the number used by a typical process. The kernel gains efficiency by not copying unneeded portions of the descriptor set between the process and the kernel, and by not testing bits that are always 0.

readset, writeset, and exceptset as value-result arguments

select modifies the descriptor sets pointed to by the readset, writeset, and exceptset pointers. These three arguments are value-result arguments. When we call the function, we specify the values of the descriptors that we are interested in, and on return, the result indicates which descriptors are ready. We use the FD_ISSET macro on return to test a specific descriptor in an fd_set structure. Any descriptor that is not ready on return will have its corresponding bit cleared in the descriptor set. To handle this, we turn on all the bits in which we are interested in all the descriptor sets each time we call select.

Return value of select

The return value from this function indicates the total number of bits that are ready across all the descriptor sets. If the timer value expires before any of the descriptors are ready, a value of 0 is returned. A return value of –1 indicates an error (which can happen, for example, if the function is interrupted by a caught signal).

Conditions for a Ready Descriptor

Previous sections discussed waiting for a descriptor to become ready for I/O (reading or writing) or to have an exception condition pending on it (out-of-band data). The following discussion is specific about the conditions that cause select to return “ready” for sockets.

A socket is ready for reading if any of the following four conditions is true:

1. The number of bytes of data in the socket receive buffer is greater than or equal to the current size of the low-water mark for the socket receive buffer. A read operation on the socket will not block and will return a value greater than 0 (i.e., the data that is ready to be read). We can set this low-water mark using the SO_RCVLOWAT socket option. It defaults to 1 for TCP and UDP sockets.

2. The read half of the connection is closed (i.e., a TCP connection that has received a FIN). A read operation on the socket will not block and will return 0 (i.e., EOF).

3. The socket is a listening socket and the number of completed connections is nonzero.

4. A socket error is pending. A read operation on the socket will not block and will return an error (–1) with errno set to the specific error condition. These pending errors can also be fetched and cleared by calling getsockopt and specifying the SO_ERROR socket option.

A socket is ready for writing if any of the following four conditions is true:

1. The number of bytes of available space in the socket send buffer is greater than or equal to the current size of the low-water mark for the socket send buffer and either: (i) the socket is connected, or (ii) the socket does not require a connection (e.g., UDP). This means that if we set the socket to nonblocking (Chapter 16), a write operation will not block and will return a positive value (e.g., the number of bytes accepted by the transport layer). We can set this low-water mark using the SO_SNDLOWAT socket option. This low-water mark normally defaults to 2048 for TCP and UDP sockets.

2. The write half of the connection is closed. A write operation on the socket will generate SIGPIPE (Section 5.12).

3. A socket using a non-blocking connect has completed the connection, or the connect has failed.

4. A socket error is pending. A write operation on the socket will not block and will return an error (–1) with errno set to the specific error condition. These pending errors can also be fetched and cleared by calling getsockopt with the SO_ERROR socket option.

A socket has an exception condition pending if there is out-of-band data for the socket or the socket is still at the out-of-band mark (Chapter 24).

When an error occurs on a socket, it is marked as both readable and writable by select.

The purpose of the receive and send low-water marks is to give the application control over how much data must be available for reading or how much space must be available for writing before select returns a readable or writable status. For example, if we know that our application has nothing productive to do unless at least 64 bytes of data are present, we can set the receive low-water mark to 64 to prevent select from waking us up if less than 64 bytes are ready for reading.

As long as the send low-water mark for a UDP socket is less than the send buffer size (which should always be the default relationship), the UDP socket is always writable, since a connection is not required.
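
As a concrete sketch (ours, not from the source), raising the receive low-water mark to 64 bytes so that select does not report the socket readable for smaller amounts:

#include <stdio.h>
#include <sys/socket.h>

/* Tell the kernel not to mark sockfd readable (for select/poll)
   until at least 64 bytes are available in the receive buffer. */
int set_rcvlowat(int sockfd)
{
    int lowat = 64;
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVLOWAT,
                   &lowat, sizeof(lowat)) == -1) {
        perror("setsockopt(SO_RCVLOWAT)");
        return -1;
    }
    return 0;
}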

The following table is the summary of conditions that cause a socket to be ready for select.

Condition                                    Readable?  Writable?  Exception
------------------------------------------  ---------  ---------  ---------
Data to read                                     x
Read half of the connection closed               x
New connection ready for listening socket        x
Space available for writing                                 x
Write half of the connection closed                         x
Pending error                                    x          x
TCP out-of-band data                                                    x

Maximum Number of Descriptors for select

Most applications do not use lots of descriptors. It is rare to find an application that uses hundreds of descriptors, but such applications do exist, and they often use select to multiplex the descriptors.

When select was originally designed, the OS normally had an upper limit on the maximum number of descriptors per process (the 4.2BSD limit was 31), and select just used this same limit. But, current versions of Unix allow for a virtually unlimited number of descriptors per process (often limited only by the amount of memory and any administrative limits), which affects select .

Many implementations have declarations similar to the following, which are taken from the 4.4BSD header:

/*
 * Select uses bitmasks of file descriptors in longs. These macros
 * manipulate such bit fields (the filesystem macros use chars).
 * FD_SETSIZE may be defined by the user, but the default here should
 * be enough for most uses.
 */
#ifndef FD_SETSIZE
#define FD_SETSIZE 256
#endif

This makes us think that we can just #define FD_SETSIZE to some larger value before including this header to increase the size of the descriptor sets used by select. Unfortunately, this normally does not work. The three descriptor sets are declared within the kernel, which also uses its own definition of FD_SETSIZE as the upper limit. The only way to increase the size of the descriptor sets is to increase the value of FD_SETSIZE and then recompile the kernel. Changing the value without recompiling the kernel is inadequate.

Some vendors are changing their implementation of select to allow the process to define FD_SETSIZE to a larger value than the default. BSD/OS has changed the kernel implementation to allow larger descriptor sets, and it also provides four new FD_ xxx macros to dynamically allocate and manipulate these larger sets. From a portability standpoint, however, beware of using large descriptor sets.

The problem with the earlier version of str_cli (Section 5.5) was that we could be blocked in the call to fgets when something happened on the socket. We can now rewrite our str_cli function using select so that:

The client process is notified as soon as the server process terminates.

The client process blocks in a call to select waiting for either standard input or the socket to be readable.

The figure below shows the various conditions that are handled by our call to select :

Three conditions are handled with the socket:

1. If the peer TCP sends data, the socket becomes readable and read returns greater than 0 (the number of bytes of data).

2. If the peer TCP sends a FIN (the peer process terminates), the socket becomes readable and read returns 0 (EOF).

3. If the peer TCP sends an RST (the peer host has crashed and rebooted), the socket becomes readable, read returns –1, and errno contains the specific error code.

Below is the source code for this new version.

select/strcliselect01.c

#include "unp.h"

void
str_cli(FILE *fp, int sockfd)
{
    int     maxfdp1;
    fd_set  rset;
    char    sendline[MAXLINE], recvline[MAXLINE];

    FD_ZERO(&rset);
    for ( ; ; ) {
        FD_SET(fileno(fp), &rset);
        FD_SET(sockfd, &rset);
        maxfdp1 = max(fileno(fp), sockfd) + 1;
        Select(maxfdp1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(sockfd, &rset)) {      /* socket is readable */
            if (Readline(sockfd, recvline, MAXLINE) == 0)
                err_quit("str_cli: server terminated prematurely");
            Fputs(recvline, stdout);
        }

        if (FD_ISSET(fileno(fp), &rset)) {  /* input is readable */
            if (Fgets(sendline, MAXLINE, fp) == NULL)
                return;                     /* all done */
            Writen(sockfd, sendline, strlen(sendline));
        }
    }
}

This code does the following:

  • Call select. We only need one descriptor set (rset) to check for readability. This set is initialized by FD_ZERO and then two bits are turned on using FD_SET: the bit corresponding to the standard I/O file pointer, fp, and the bit corresponding to the socket, sockfd. The function fileno converts a standard I/O file pointer into its corresponding descriptor, since select (and poll) work only with descriptors. select is called after calculating the maximum of the two descriptors. In the call, the write-set pointer and the exception-set pointer are both null pointers. The final argument (the time limit) is also a null pointer since we want the call to block until something is ready.

  • Handle readable socket. On return from select, if the socket is readable, the echoed line is read with readline and output by fputs.

  • Handle readable input. If the standard input is readable, a line is read by fgets and written to the socket using writen.

Instead of the function flow being driven by the call to fgets , it is now driven by the call to select .

Batch Input and Buffering

Unfortunately, our str_cli function is still not correct. Our original version in Section 5.5 operates in a stop-and-wait mode, which is fine for interactive use: It sends a line to the server and then waits for the reply. This amount of time is one RTT plus the server’s processing time (which is close to 0 for a simple echo server). We can therefore estimate how long it will take for a given number of lines to be echoed if we know the RTT between the client and server. We can use ping to measure RTTs.

If we consider the network between the client and server as a full-duplex pipe, with requests going from the client to the server and replies in the reverse direction, then the following figure shows our stop-and-wait mode:

Note that this figure:

Assumes that there is no server processing time and that the size of the request is the same as the reply

Shows only the data packets, ignoring the TCP acknowledgments that are also going across the network

A request is sent by the client at time 0 and we assume an RTT of 8 units of time. The reply sent at time 4 is received at time 7.

This stop-and-wait mode is fine for interactive input. The problem is that if we run our client in a batch mode, redirecting the input and output, the resulting output file is always smaller than the input file (and they should be identical for an echo server).

Batch mode

To see what’s happening, realize that in a batch mode, we can keep sending requests as fast as the network can accept them. The server processes them and sends back the replies at the same rate. This leads to the full pipe at time 7, as shown below:

We assume:

After sending the first request, we immediately send another, and then another

We can keep sending requests as fast as the network can accept them, along with processing replies as fast as the network supplies them.

Assume that the input file contains only nine lines. The last line is sent at time 8, as shown in the above figure. But we cannot close the connection after writing this request because there are still other requests and replies in the pipe. The cause of the problem is our handling of an EOF on input: the str_cli function returns to the main function, which then terminates. But in a batch mode, an EOF on input does not imply that we have finished reading from the socket; there might still be requests on the way to the server, or replies on the way back from the server.

The solution is to close one-half of the TCP connection by sending a FIN to the server, telling it we have finished sending data, but leave the socket descriptor open for reading. This is done with the shutdown function, described in the next section.

Buffering concerns

Buffering for performance as in str_cli (Section 6.7) adds complexity to a network application.

When several lines of input are available from the standard input, select will cause the code (select/strcliselect01.c#L24) to read the input using fgets, which will read the available lines into a buffer used by stdio. But fgets only returns a single line and leaves any remaining data sitting in the stdio buffer. The following code (select/strcliselect01.c#L26) writes that single line to the server, and then select is called again to wait for more work, even if there are additional lines to consume in the stdio buffer. The reason is that select knows nothing of the buffers used by stdio; it will only show readability from the viewpoint of the read system call, not calls like fgets. Thus, mixing stdio and select is considered very error-prone and should only be done with great care.

The same problem exists with readline in this example (the str_cli function). Instead of data being hidden from select in a stdio buffer, it is hidden in readline's buffer. In Section 3.9 we provided a function (lib/readline.c#L52) that gives visibility into readline's buffer, so one possible solution is to modify our code to use that function before calling select to see if data has already been read but not consumed. But again, the complexity grows out of hand quickly when we have to handle the case where the readline buffer contains a partial line (meaning we still need to read more) as well as when it contains one or more complete lines (which we can consume).

We will address these buffering concerns in the improved version of str_cli shown in Section 6.7.

shutdown Function

The normal way to terminate a network connection is to call the close function. But there are two limitations with close that can be avoided with shutdown:

1. close decrements the descriptor’s reference count and closes the socket only if the count reaches 0 (Section 4.8). With shutdown, we can initiate TCP’s normal connection termination sequence (the four segments beginning with a FIN in Figure 2.5), regardless of the reference count.

2. close terminates both directions of data transfer, reading and writing. Since a TCP connection is full-duplex, there are times when we want to tell the other end that we have finished sending, even though that end might have more data to send us. This is the scenario we encountered in the previous section with batch input to our str_cli function. The figure below shows the typical function calls in this scenario.

#include <sys/socket.h>

int shutdown(int sockfd, int howto);

/* Returns: 0 if OK, –1 on error */

The action of the function depends on the value of the howto argument:

  • SHUT_RD: The read half of the connection is closed. No more data can be received on the socket and any data currently in the socket receive buffer is discarded. The process can no longer issue any of the read functions on the socket. Any data received after this call for a TCP socket is acknowledged and then silently discarded.

  • SHUT_WR: The write half of the connection is closed. In the case of TCP, this is called a half-close. Any data currently in the socket send buffer will be sent, followed by TCP’s normal connection termination sequence. As we mentioned earlier, this closing of the write half is done regardless of whether or not the socket descriptor’s reference count is currently greater than 0. The process can no longer issue any of the write functions on the socket.

  • SHUT_RDWR: The read half and the write half of the connection are both closed. This is equivalent to calling shutdown twice: first with SHUT_RD and then with SHUT_WR.

The three SHUT_xxx names are defined by the POSIX specification. Typical values for the howto argument that you will encounter are 0 (close the read half), 1 (close the write half), and 2 (close both the read half and the write half).
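To make the half-close concrete, here is a minimal sketch of the batch-input pattern (error checking omitted; sockfd is assumed to be an already-connected TCP socket, and batch_request is an illustrative name, not a function from the book):

#include <sys/socket.h>
#include <unistd.h>

void
batch_request(int sockfd, const char *req, size_t reqlen)
{
    char    buf[4096];
    ssize_t n;

    write(sockfd, req, reqlen);     /* send the entire batch of requests */
    shutdown(sockfd, SHUT_WR);      /* send FIN; the read half stays open */

    while ( (n = read(sockfd, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, n);   /* consume replies until the server's FIN */
}

Because only the write half is closed, the replies to the final requests are still received; calling close here instead would discard them.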

The following code is our revised and correct version of the str_cli function that uses select and shutdown . In the function, select notifies us as soon as the server closes its end of the connection and shutdown lets us handle batch input correctly.

select/strcliselect02.c

#include "unp.h"

void
str_cli(FILE *fp, int sockfd)
{
    int     maxfdp1, stdineof;
    fd_set  rset;
    char    buf[MAXLINE];
    int     n;

    stdineof = 0;
    FD_ZERO(&rset);
    for ( ; ; ) {
        if (stdineof == 0)
            FD_SET(fileno(fp), &rset);
        FD_SET(sockfd, &rset);
        maxfdp1 = max(fileno(fp), sockfd) + 1;
        Select(maxfdp1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(sockfd, &rset)) {      /* socket is readable */
            if ( (n = Read(sockfd, buf, MAXLINE)) == 0) {
                if (stdineof == 1)
                    return;     /* normal termination */
                else
                    err_quit("str_cli: server terminated prematurely");
            }
            Write(fileno(stdout), buf, n);
        }

        if (FD_ISSET(fileno(fp), &rset)) {  /* input is readable */
            if ( (n = Read(fileno(fp), buf, MAXLINE)) == 0) {
                stdineof = 1;
                Shutdown(sockfd, SHUT_WR);  /* send FIN */
                FD_CLR(fileno(fp), &rset);
                continue;
            }
            Writen(sockfd, buf, n);
        }
    }
}

- stdineof is a new flag that is initialized to 0. As long as this flag is 0, each time around the main loop we select on standard input for readability.

- Normal and premature termination. When we read the EOF on the socket: if we have already encountered an EOF on standard input, this is normal termination and the function returns; if we have not yet encountered an EOF on standard input, the server process has terminated prematurely. We now call read and write to operate on buffers instead of lines and allow select to work for us as expected.

- shutdown. When we encounter the EOF on standard input, our new flag, stdineof, is set and we call shutdown with a second argument of SHUT_WR to send the FIN. Here, too, we operate on buffers instead of lines, using read and writen.

TCP Echo Server (Revisited)¶

We now rewrite the TCP echo server (Sections 5.2 and 5.3) as a single process that uses select to handle any number of clients, instead of forking one child per client.

Before the first client has established a connection¶

Before the first client has established a connection, the server has a single listening descriptor.

The server maintains only a read descriptor set (rset), shown in the following figure. Assuming the server is started in the foreground, descriptors 0, 1, and 2 are set to standard input, output, and error, so the first available descriptor for the listening socket is 3.

We also show an array of integers named client that contains the connected socket descriptor for each client. All elements in this array are initialized to –1.

The only nonzero entry in the descriptor set is the entry for the listening socket, and the first argument to select will be 4, as the snippet below illustrates.
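A minimal snippet of the initialization that produces this state, using the book's wrapper functions and assuming the listening socket is descriptor 3 as described above:

fd_set rset;

FD_ZERO(&rset);             /* start with all bits off */
FD_SET(listenfd, &rset);    /* turn on the bit for the listening socket (3) */
Select(listenfd + 1, &rset, NULL, NULL, NULL);      /* first argument is 4 */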

After the first client establishes a connection¶

When the first client establishes a connection with our server, the listening descriptor becomes readable and our server calls accept . The new connected descriptor returned by accept will be 4. The following figure shows this connection:

The server must remember the new connected socket in its client array, and the connected socket must be added to the descriptor set. The updated data structures are shown in the figure below:

After the second client connection is established¶

Sometime later a second client establishes a connection and we have the scenario shown below:

The new connected socket (which we assume is 5) must be remembered, giving the data structures shown below:

After the first client terminates its connection¶

Next, we assume the first client terminates its connection. The client TCP sends a FIN, which makes descriptor 4 in the server readable. When our server reads this connected socket, read returns 0. We then close this socket and update our data structures accordingly. The value of client[0] is set to –1 and descriptor 4 in the descriptor set is set to 0. This is shown in the figure below. Notice that the value of maxfd does not change.

Summary of the TCP echo server (revisited)¶

As clients arrive, we record their connected socket descriptor in the first available entry in the client array (the first entry with a value of –1) and also add the connected socket to the read descriptor set.

The variable maxi is the highest index in the client array that is currently in use and the variable maxfd (plus one) is the current value of the first argument to select.

The only limit on the number of clients that this server can handle is the minimum of the two values FD_SETSIZE and the maximum number of descriptors allowed for this process by the kernel (Section 6.3).

tcpcliserv/tcpservselect01.c

/* include fig01 */
#include "unp.h"

int
main(int argc, char **argv)
{
    int                 i, maxi, maxfd, listenfd, connfd, sockfd;
    int                 nready, client[FD_SETSIZE];
    ssize_t             n;
    fd_set              rset, allset;
    char                buf[MAXLINE];
    socklen_t           clilen;
    struct sockaddr_in  cliaddr, servaddr;

    listenfd = Socket(AF_INET, SOCK_STREAM, 0);

    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(SERV_PORT);

    Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
    Listen(listenfd, LISTENQ);

    maxfd = listenfd;           /* initialize */
    maxi = -1;                  /* index into client[] array */
    for (i = 0; i < FD_SETSIZE; i++)
        client[i] = -1;         /* -1 indicates available entry */
    FD_ZERO(&allset);
    FD_SET(listenfd, &allset);
    /* end fig01 */

    /* include fig02 */
    for ( ; ; ) {
        rset = allset;          /* structure assignment */
        nready = Select(maxfd + 1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(listenfd, &rset)) {    /* new client connection */
            clilen = sizeof(cliaddr);
            connfd = Accept(listenfd, (SA *) &cliaddr, &clilen);
#ifdef NOTDEF
            printf("new client: %s, port %d\n",
                   Inet_ntop(AF_INET, &cliaddr.sin_addr, 4, NULL),
                   ntohs(cliaddr.sin_port));
#endif
            for (i = 0; i < FD_SETSIZE; i++)
                if (client[i] < 0) {
                    client[i] = connfd;     /* save descriptor */
                    break;
                }
            if (i == FD_SETSIZE)
                err_quit("too many clients");

            FD_SET(connfd, &allset);    /* add new descriptor to set */
            if (connfd > maxfd)
                maxfd = connfd;         /* for select */
            if (i > maxi)
                maxi = i;               /* max index in client[] array */

            if (--nready <= 0)
                continue;               /* no more readable descriptors */
        }

        for (i = 0; i <= maxi; i++) {   /* check all clients for data */
            if ( (sockfd = client[i]) < 0)
                continue;
            if (FD_ISSET(sockfd, &rset)) {
                if ( (n = Read(sockfd, buf, MAXLINE)) == 0) {
                    /* connection closed by client */
                    Close(sockfd);
                    FD_CLR(sockfd, &allset);
                    client[i] = -1;
                } else
                    Writen(sockfd, buf, n);

                if (--nready <= 0)
                    break;              /* no more readable descriptors */
            }
        }
    }
}
/* end fig02 */

The code does the following:

- Create listening socket and initialize for select. We create the listening socket using socket, bind, and listen, and initialize our data structures assuming that the only descriptor we will select on initially is the listening socket.

- Block in select. select waits for something to happen, which is one of the following: the establishment of a new client connection, the arrival of data on an existing connection, a FIN on an existing connection, or an RST on an existing connection.

- accept new connections. If the listening socket is readable, a new connection has been established. We call accept and update our data structures accordingly, using the first unused entry in the client array to record the connected socket. The number of ready descriptors is decremented, and if it is 0 (tcpcliserv/tcpservselect01.c#L62), we can avoid the next for loop. This lets us use the return value from select to avoid checking descriptors that are not ready.

- Check existing connections. In the second nested for loop, we test each existing client connection to see whether its descriptor is in the descriptor set returned by select. If so, a line is read from the client and echoed back. If the client closes the connection instead, read returns 0 and we update our data structures accordingly. We never decrement the value of maxi, but we could check for this possibility each time a client closes its connection.

This server is more complicated than the earlier version (Sections 5.2 and 5.3), but it avoids all the overhead of creating a new process for each client, and it is a nice example of select. Nevertheless, in Section 16.6 we will describe a problem with this server that is easily fixed by making the listening socket nonblocking and then checking for, and ignoring, a few errors from accept.

There is a problem with the server in the above example: if a malicious client connects to the server, sends one byte of data (other than a newline), and then goes to sleep, the server will call read, which reads the single byte of data from the client and then blocks in the next call to read, waiting for more data from this client. The server is then blocked ("hung") by this one client and will not service any other clients until the malicious client either sends a newline or terminates.

The basic concept here is that when a server is handling multiple clients, the server can never block in a function call related to a single client. Doing so can hang the server and deny service to all other clients. This is called a denial-of-service attack, which prevents the server from servicing other legitimate clients. Possible solutions are:

- Use nonblocking I/O (Chapter 16)
- Have each client serviced by a separate thread of control (either spawn a process or a thread to service each client)
- Place a timeout on the I/O operations

pselect Function¶

The pselect function was invented by POSIX and is now supported by many of the Unix variants.

#include <sys/select.h>
#include <signal.h>
#include <time.h>

int pselect(int maxfdp1, fd_set *readset, fd_set *writeset, fd_set *exceptset,
            const struct timespec *timeout, const sigset_t *sigmask);

/* Returns: count of ready descriptors, 0 on timeout, –1 on error */

pselect contains two changes from the normal select function:

1. pselect uses the timespec structure (another POSIX invention) instead of the timeval structure. The tv_nsec member of the newer structure specifies nanoseconds, whereas the tv_usec member of the older structure specifies microseconds.

struct timespec {
    time_t  tv_sec;     /* seconds */
    long    tv_nsec;    /* nanoseconds */
};

2. pselect adds a sixth argument: a pointer to a signal mask. This allows the program to disable the delivery of certain signals, test some global variables that are set by the handlers for these now-disabled signals, and then call pselect, telling it to reset the signal mask.
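With regard to the first point, a timeout of 5.5 seconds, for example, would be built as follows (illustrative values):

struct timespec ts;

ts.tv_sec  = 5;             /* whole seconds */
ts.tv_nsec = 500000000;     /* 0.5 s expressed as 500,000,000 nanoseconds */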

With regard to the second point, consider the following example (discussed in APUE). Our program's signal handler for SIGINT just sets the global intr_flag and returns. If our process is blocked in a call to select, the return from the signal handler causes the function to return with errno set to EINTR. The calling code might look like the following:

if (intr_flag)
    handle_intr();      /* handle the signal */

/* signals occurring in here are lost */

if ( (nready = select( ... )) < 0) {
    if (errno == EINTR) {
        if (intr_flag)
            handle_intr();
    }
    ...
}

The problem is that if the signal occurs between the test of intr_flag and the call to select, it will be lost if select blocks forever. With pselect, we can now code this example reliably as:

sigset_t newmask, oldmask, zeromask;

sigemptyset(&zeromask);
sigemptyset(&newmask);
sigaddset(&newmask, SIGINT);

sigprocmask(SIG_BLOCK, &newmask, &oldmask);     /* block SIGINT */
if (intr_flag)
    handle_intr();      /* handle the signal */

if ( (nready = pselect( ... , &zeromask)) < 0) {
    if (errno == EINTR) {
        if (intr_flag)
            handle_intr();
    }
    ...
}

Before testing the intr_flag variable, we block SIGINT. When pselect is called, it replaces the signal mask of the process with an empty set (i.e., zeromask) and then checks the descriptors, possibly going to sleep. But when pselect returns, the signal mask of the process is reset to its value before pselect was called (i.e., SIGINT is blocked again).

poll Function¶

poll provides functionality that is similar to select, but poll provides additional information when dealing with STREAMS devices.

#include <poll.h>

int poll(struct pollfd *fdarray, unsigned long nfds, int timeout);

/* Returns: count of ready descriptors, 0 on timeout, –1 on error */

Arguments:

The first argument (fdarray) is a pointer to the first element of an array of structures. Each element is a pollfd structure that specifies the conditions to be tested for a given descriptor, fd .

struct pollfd {
    int   fd;       /* descriptor to check */
    short events;   /* events of interest on fd */
    short revents;  /* events that occurred on fd */
};

The conditions to be tested are specified by the events member, and the function returns the status for that descriptor in the corresponding revents member. This data structure (having two variables per descriptor, one a value and one a result) avoids value-result arguments (the middle three arguments for select are value-result). Each of these two members is composed of one or more bits that specify a certain condition. The following figure shows the constants used to specify the events flag and to test the revents flag against.
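The figure itself is not reproduced here; for reference, the constants it describes (the standard POSIX names from <poll.h>) are:

Constant     Input to events?  Result from revents?  Description
POLLIN       yes               yes                   Normal or priority band data can be read
POLLRDNORM   yes               yes                   Normal data can be read
POLLRDBAND   yes               yes                   Priority band data can be read
POLLPRI      yes               yes                   High-priority data can be read
POLLOUT      yes               yes                   Normal data can be written
POLLWRNORM   yes               yes                   Normal data can be written
POLLWRBAND   yes               yes                   Priority band data can be written
POLLERR      no                yes                   An error has occurred
POLLHUP      no                yes                   Hangup has occurred
POLLNVAL     no                yes                   Descriptor is not an open file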

The first four constants deal with input, the next three deal with output, and the final three deal with errors. The final three cannot be set in events , but are always returned in revents when the corresponding condition exists.

With regard to TCP and UDP sockets, the following conditions cause poll to return the specified revents. Unfortunately, POSIX leaves many holes (i.e., optional ways to return the same condition) in its definition of poll.

All regular TCP data and all UDP data is considered normal.

TCP’s out-of-band data is considered priority band.

When the read half of a TCP connection is closed (e.g., a FIN is received), this is also considered normal data and a subsequent read operation will return 0.

The presence of an error for a TCP connection can be considered either normal data or an error ( POLLERR ). In either case, a subsequent read will return –1 with errno set to the appropriate value. This handles conditions such as the receipt of an RST or a timeout.

The availability of a new connection on a listening socket can be considered either normal data or priority data. Most implementations consider this normal data.

The completion of a nonblocking connect is considered to make a socket writable.

The number of elements in the array of structures is specified by the nfds argument.

The timeout argument specifies how long the function is to wait before returning. A positive value specifies the number of milliseconds to wait, a value of 0 causes poll to return immediately after checking the descriptors, and the constant INFTIM (wait forever) is defined to be a negative value.

Return values from poll :

–1 if an error occurred

0 if no descriptors are ready before the timer expires

Otherwise, it is the number of descriptors that have a nonzero revents member.

If we are no longer interested in a particular descriptor, we just set the fd member of the pollfd structure to a negative value. Then the events member is ignored and the revents member is set to 0 on return.
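As a minimal, self-contained sketch of calling poll (here waiting up to five seconds for standard input to become readable; the descriptor and timeout are arbitrary choices for illustration):

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    struct pollfd fds[1];
    int           nready;

    fds[0].fd     = STDIN_FILENO;   /* descriptor to check */
    fds[0].events = POLLIN;         /* interested only in readability */

    nready = poll(fds, 1, 5000);    /* timeout is in milliseconds */
    if (nready < 0)
        perror("poll");
    else if (nready == 0)
        printf("timeout: no input within 5 seconds\n");
    else if (fds[0].revents & POLLIN)
        printf("standard input is readable\n");
    return 0;
}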

TCP Echo Server (Revisited Again)¶

This section discusses the TCP echo server from Section 6.8, rewritten using poll instead of select.

In the select version, we allocate a client array along with a descriptor set named rset (tcpcliserv/tcpservselect01.c). With poll, we must instead allocate an array of pollfd structures to maintain the client information. We handle the fd member of this array the same way we handled the client array in the select version: a value of –1 means the entry is not in use; otherwise, it is the descriptor value. Any entry in the array of pollfd structures passed to poll with a negative fd member is simply ignored.

tcpcliserv/tcpservpoll01.c

/* include fig01 */
#include "unp.h"
#include <limits.h>     /* for OPEN_MAX */

int
main(int argc, char **argv)
{
    int         i, maxi, listenfd, connfd, sockfd;
    int         nready;
    ssize_t     n;
    char        buf[MAXLINE];
    socklen_t   clilen;
    struct pollfd       client[OPEN_MAX];
    struct sockaddr_in  cliaddr, servaddr;

    listenfd = Socket(AF_INET, SOCK_STREAM, 0);

    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(SERV_PORT);

    Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
    Listen(listenfd, LISTENQ);

    client[0].fd = listenfd;
    client[0].events = POLLRDNORM;
    for (i = 1; i < OPEN_MAX; i++)
        client[i].fd = -1;      /* -1 indicates available entry */
    maxi = 0;                   /* max index into client[] array */
    /* end fig01 */

    /* include fig02 */
    for ( ; ; ) {
        nready = Poll(client, maxi + 1, INFTIM);

        if (client[0].revents & POLLRDNORM) {   /* new client connection */
            clilen = sizeof(cliaddr);
            connfd = Accept(listenfd, (SA *) &cliaddr, &clilen);
#ifdef NOTDEF
            printf("new client: %s\n", Sock_ntop((SA *) &cliaddr, clilen));
#endif
            for (i = 1; i < OPEN_MAX; i++)
                if (client[i].fd < 0) {
                    client[i].fd = connfd;      /* save descriptor */
                    break;
                }
            if (i == OPEN_MAX)
                err_quit("too many clients");

            client[i].events = POLLRDNORM;
            if (i > maxi)
                maxi = i;       /* max index in client[] array */

            if (--nready <= 0)
                continue;       /* no more readable descriptors */
        }

        for (i = 1; i <= maxi; i++) {   /* check all clients for data */
            if ( (sockfd = client[i].fd) < 0)
                continue;
            if (client[i].revents & (POLLRDNORM | POLLERR)) {
                if ( (n = read(sockfd, buf, MAXLINE)) < 0) {
                    if (errno == ECONNRESET) {
                        /* connection reset by client */
#ifdef NOTDEF
                        printf("client[%d] aborted connection\n", i);
#endif
                        Close(sockfd);
                        client[i].fd = -1;
                    } else
                        err_sys("read error");
                } else if (n == 0) {
                    /* connection closed by client */
#ifdef NOTDEF
                    printf("client[%d] closed connection\n", i);
#endif
                    Close(sockfd);
                    client[i].fd = -1;
                } else
                    Writen(sockfd, buf, n);

                if (--nready <= 0)
                    break;      /* no more readable descriptors */
            }
        }
    }
}
/* end fig02 */

This code does the following:

- The first entry of the client array is used for the listening socket, with events set to POLLRDNORM; the fd member of every other entry is set to –1 to mark it as available.

- poll is called with a timeout of INFTIM, waiting forever for either a new connection or data on an existing connection.

- If the listening socket is readable (POLLRDNORM is set in client[0].revents), we accept the new connection, record it in the first available entry of the array, and set that entry's events member to POLLRDNORM.

- For each in-use entry, we check revents for POLLRDNORM or POLLERR and call read. If read returns data, it is echoed back with writen; if it returns 0, the client closed the connection; if it fails with ECONNRESET, the client aborted the connection. In the latter two cases we close the socket and set the entry's fd back to –1.

I/O Multiplexing: The Select and Poll Functions

We have seen that a TCP client can handle two inputs at the same time: standard input and a TCP socket. We encountered a problem when the client was blocked in a call to read (via the readline function) and the server process was killed. The server TCP correctly sends a FIN to the client TCP, but since the client process is blocked reading from standard input, it never sees the end-of-file until it reads from the socket. What we need is the capability to tell the kernel that we want to be notified when one or more I/O conditions are ready (i.e., input is ready to be read, or a descriptor is capable of taking more output). This capability is called I/O multiplexing and is provided by the select and poll functions. There is also a POSIX.1g variation called pselect.

I/O multiplexing is typically used in networking applications in the following scenarios:

• When a client is handling multiple descriptors (normally interactive input and a network socket), I/O multiplexing should be used. This is the scenario described in the previous paragraph.

• It is possible, but rare, for a client to handle multiple sockets at the same time. We show an example of this using select in the context of a web client.

• If a TCP server handles both a listening socket and its connected sockets, I/O multiplexing is normally used.

• If a server handles both TCP and UDP, I/O multiplexing is normally used.

• If a server handles multiple services and perhaps multiple protocols, I/O multiplexing is normally used.

I/O multiplexing is not restricted to networking programs; it can be used in any nontrivial application.

I/O Models:

There are five I/O models in Unix. These are:

a. Blocking I/O

b. Nonblocking I/O

c. I/O multiplexing (select and poll)

d. Signal-driven I/O (SIGIO)

e. Asynchronous I/O (the POSIX aio_ functions)

There are two distinct phases for an input operation:

a. waiting for the data to be ready, and

b. copying the data from the kernel to the process.

For an input operation on a socket, the first step normally involves waiting for data to arrive on the network. When a packet arrives, it is copied into a buffer within the kernel. The second step is copying this data from the kernel's buffer into our application's buffer.

Blocking I/O Model:

The most prevalent model for I/O is the blocking I/O model, which we have used for all our examples so far in the text. By default, all sockets are blocking. Using a datagram socket for our examples, we have the scenario shown below. In UDP, the concept of data being "ready" to read is simple: either an entire datagram has been received or it has not.

In this example, recvfrom is shown as a system call because it marks the dividing line between our application and the kernel.

The process calls recvfrom, and the system call does not return until the datagram arrives and is copied into our application buffer, or an error occurs. The most common error is the system call being interrupted by a signal. We say that our process is blocked the entire time from when it calls recvfrom until the call returns. When recvfrom returns OK, our application processes the datagram.
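A minimal sketch of this model (a UDP receiver that blocks in recvfrom; the port number is an arbitrary example, and error checking is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int
main(void)
{
    int                 sockfd;
    char                buf[1500];
    ssize_t             n;
    struct sockaddr_in  servaddr, cliaddr;
    socklen_t           len = sizeof(cliaddr);

    sockfd = socket(AF_INET, SOCK_DGRAM, 0);    /* blocking by default */

    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(9999);     /* arbitrary example port */
    bind(sockfd, (struct sockaddr *) &servaddr, sizeof(servaddr));

    /* The process sleeps here until a datagram arrives and has been
     * copied into buf, or until an error (such as an interrupting
     * signal) occurs. */
    n = recvfrom(sockfd, buf, sizeof(buf), 0,
                 (struct sockaddr *) &cliaddr, &len);
    printf("received %zd bytes\n", n);
    return 0;
}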

Nonblocking I/O Model:

When a socket is set to nonblocking, we are telling the kernel: "when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to sleep, but return an error instead." The following figure gives the details.

The first three times recvfrom is called, there is no data to return, so the kernel immediately returns an error of EWOULDBLOCK. The fourth time recvfrom is called, a datagram is ready; it is copied into our application buffer, and recvfrom returns successfully. The application then processes the data.

When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. Continually polling the kernel this way is usually a waste of CPU time, but the model is occasionally encountered, normally on systems dedicated to one function.
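A minimal sketch of this polling loop (using fcntl to set O_NONBLOCK; sockfd is assumed to be an already-created, bound datagram socket like the one above, and poll_recv is an illustrative name):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

ssize_t
poll_recv(int sockfd, char *buf, size_t buflen)
{
    ssize_t n;
    int     flags;

    flags = fcntl(sockfd, F_GETFL, 0);
    fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);     /* mark nonblocking */

    for ( ; ; ) {
        n = recvfrom(sockfd, buf, buflen, 0, NULL, NULL);
        if (n >= 0)
            return n;           /* a datagram was copied into buf */
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return -1;          /* a real error */
        /* No data yet: this retry loop is exactly the CPU-burning
         * polling described above; we sleep briefly so the example
         * does not spin flat out. */
        sleep(1);
    }
}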
