Just think, Just do it

Write Scalable Winsock Apps Using Completion Ports
     Abstract: Asynchronous I/O, APCs, I/O completion ports, thread pools, and high-performance servers. kraft @ 2005-11-13 18:37. Original author: Fang (fangguicheng@21cn.com). Part 1 of "Asynchronous I/O, APCs, I/O Completion Ports, Thread Pools, and High-Performance Servers": background on asynchronous I/O: polling, PIO, DMA, interrupts. Early I/O devices were not yet dramatically slower than the CPU; the CPU would periodically poll each I/O device to see whether anything needed handling...  Read full article
posted @ 2005-12-05 15:08 zfly reads(3215) | comments(0)
 
Writing the Apache server-side program on Win32??
Has anyone studied how Apache's server-side program is written for WinNT?
1 parent process
1 child   process + n threads
Concurrency of I/O requests is controlled through a Win32 completion port.

There are a few points I still haven't figured out after several days. Anyone willing to discuss?

---------------------------------------------------------------------------------
2005.11.30
I've read a lot of material, but I still haven't mastered debugging on Win32!!
I hope people skilled in reverse engineering can trade notes on inter-process debugging.

Unlike fork() on Linux, a parent process here spawns its own child, which makes debugging awkward: breakpoints set in the child process were never hit. After yesterday plus today of painful grinding, I can finally debug it.

posted @ 2005-11-29 14:09 zfly reads(276) | comments(0)
 
Finally back!

I can read what I want in peace again!!
-------------------------------------------------------------------------------------


The days slip by, and too many distractions and chores (NBA, TV dramas, browsing news and sports online, shopping with my wife......) eat up too much time. A saying keeps coming back to me:

To succeed, you must be able to endure solitude!
Only the paranoid succeed!
posted @ 2005-11-28 14:53 zfly reads(162) | comments(0)
 
Releasing genuinely useful material (an illustrated guide to the DNP 3.0 electric-power protocol)!
Something I once searched for in vain!
Now released free of charge!!!!!!!!
http://www.cnitblog.com/Files/zfly/dnp.rar
posted @ 2005-10-31 14:00 zfly reads(1610) | comments(2)
 
Memory Allocation Strategies and Arithmetic

1. General-purpose allocation can provide any size of memory block that a caller might request (the request size, or block size). General-purpose allocation is very flexible, but has several drawbacks, two of which are: a) performance, because it has to do more work; and b) fragmentation, because as blocks are allocated and freed we can end up with lots of little noncontiguous areas of unallocated memory. (general-purpose allocation)
 
2. Fixed-size allocation always returns a block of the same fixed size. This is obviously less flexible than general-purpose allocation, but it can be done much faster and doesn't result in the same kind of fragmentation. (fixed-size allocation)
  
In practice, you'll often see combinations of the above. For example, perhaps your memory manager uses a general-purpose allocation scheme for all requests over some size S, and as an optimization provides a fixed-size allocation scheme for all requests up to size S. It's usually unwieldy to have a separate arena for requests of size 1, another for requests of size 2, and so on; what normally happens is that the manager has a separate arena for requests of multiples of a certain size, say 16 bytes. If you request 16 bytes, great, you only use 16 bytes; if you request 17 bytes, the request is allocated from the 32-byte arena, and 15 bytes are wasted. This is a source of possible overhead, but more about that in a moment.

Who selects the memory management strategy? There are several possible layers of memory manager involved, each of which may override the previous (lower-level) one:

1. The operating system kernel provides the most basic memory allocation services. This underlying allocation strategy, and its characteristics, can vary from one operating-system platform to another, and this level is the most likely to be affected by hardware considerations.

2. The compiler's default runtime library builds its allocation services, such as C++'s operator new and C's malloc, upon the native allocation services. The compiler's services might just be a thin wrapper around the native ones and inherit their characteristics, or they might override the native strategies by buying larger chunks from the native services and then parceling those out according to their own methods.

3. The standard containers and allocators in turn use the compiler's services, and possibly further override them to implement their own strategies and optimizations.

4. Finally, user-defined containers and/or user-defined allocators can further reuse any of the lower-level services (for example, they may want to directly access native services if portability doesn't matter) and do pretty much whatever they please.
         Layering (containment relationship):
              +------------------------------------------------+
              |  user-defined containers and/or allocators     |
              +------------------------------------------------+
              |  standard containers and/or allocators         |
              +------------------------------------------------+
              |  operator new / malloc (compiler runtime)      |
              +------------------------------------------------+
              |  operating system kernel                       |
              +------------------------------------------------+
When you ask for n bytes of memory using new or malloc, you actually use up at least n bytes of memory because typically the memory manager must add some overhead to your request. Two common considerations that affect this overhead are:

1. Housekeeping overhead.
In a general-purpose (i.e., not fixed-size) allocation scheme, the memory manager will have to somehow remember how big each block is so that it later knows how much memory to release when you call delete or free. Typically the manager remembers the block size by storing that value at the beginning of the actual block it allocates, and then giving you a pointer to "your" memory that's offset past the housekeeping information. (See Figure 2.) Of course, this means it has to allocate extra space for that value, which could be a number as big as the largest possible valid allocation and so is typically the same size as a pointer.
When freeing the block, the memory manager will just take the pointer you give it, subtract the number of housekeeping bytes and read the size, then perform the deallocation.
                     +----------------------+------------------+
                     |        size          |     n bytes      |   (general-purpose allocation)
                     |  (housekeeping info) | (what I request) |
                     +----------------------+------------------+
                    
2. Chunk size overhead.
Even when you don't need to store extra information, a memory manager will often reserve more bytes than you asked for because memory is often allocated in certain-sized chunks.

For one thing, some platforms require certain types of data to appear on certain byte boundaries (e.g., some require pointers to be stored on 4-byte boundaries) and either break or perform more slowly if they're not. This is called alignment, and it calls for extra padding within, and possibly at the end of, the object's data. Even plain old built-in C-style arrays are affected by this need for alignment because it contributes to sizeof(struct). (A fixed allocation size must include padding to round out the bytes.)

For example:

// Example 1: Assume sizeof(long) == 4 and longs have a 4-byte
//            alignment requirement. (True on typical 32-bit platforms; varies by platform.)
struct X1
{
  char c1; // at offset 0, 1 byte
           // bytes 1-3: 3 padding bytes
  long l;  // bytes 4-7: 4 bytes, aligned on 4-byte boundary
  char c2; // byte 8: 1 byte
           // bytes 9-11: 3 padding bytes (see narrative)
}; // sizeof(X1) == 12
n == 1 + 3 + 4 + 1 == 9, and m == sizeof(X1) == 12

struct X2
{
  long l;  // bytes 0-3
  char c1; // byte 4
  char c2; // byte 5
           // bytes 6-7: 2 padding bytes
}; // sizeof(X2) == 8
 
Memory allocation inside the standard containers
Each standard container uses a different underlying memory structure and therefore imposes different overhead per contained object:
1. vector<T> internally stores a contiguous C-style array of T objects, and so has no extra per-element overhead at all (besides padding for alignment, of course; note that here "contiguous" has the same meaning as it does for C-style arrays, as shown in Figure 3).

2. deque<T> can be thought of as a vector<T> whose internal storage is broken up into chunks. A deque<T> stores chunks, or "pages," of objects; the actual page size isn't specified by the standard, and depends mainly on how big T objects are and on the size choices made by your standard library implementer. This paging requires the deque to store one extra pointer of management information per page, which usually works out to a mere fraction of a bit per contained object; for example, on a system with 8-bit bytes and 4-byte ints and pointers, a deque<int> with a 4K page size incurs an overhead per int of 0.03125 bits, just 1/32 of a bit. There's no other per-element overhead because deque<T> doesn't store any extra pointers or other information for individual T objects. There is no requirement that a deque's pages be C-style arrays, but that's the usual implementation.
 
3. list<T> is a doubly-linked list of nodes that hold T elements. This means that for each T element, list<T> also stores two pointers, which point to the previous and next nodes in the list. Every time we insert a new T element, we also create two more pointers, so a list<T> requires at least two pointers' worth of overhead per element.

4. set<T> (and, for that matter, a multiset<T>, map<Key,T>, or multimap<Key,T>) also stores nodes that hold T (or pair<const Key,T>) elements. The usual implementation of a set is as a tree with three extra pointers per node. Often people see this and think, "why three pointers? isn't two enough, one for the left child and one for the right child?" The reason three are needed is that we also need an "up" pointer to the parent node, otherwise determining the "next" element starting from some arbitrary iterator can't be done efficiently enough. (Besides trees, other internal implementations of set are possible; for example, an alternating skip list can be used, which still requires at least three pointers per element in the set.)
 
     Container      Typical housekeeping data overhead per contained object
     ----------    ---------------------------------------------------------
      vector         No overhead per T.
      deque          Nearly no overhead per T — typically just a fraction of a bit.
      list           Two pointers per T.
      set, multiset  Three pointers per T.
      map, multimap  Three pointers per pair<const Key, T>.
 
 Per-node structures actually allocated:
     vector      None; objects are not allocated individually. (See sidebar.)
     deque       None; objects are allocated in pages, and nearly always each
                 page will store many objects.

      list                 | set, multiset            | map, multimap
      struct LNode {       | struct SNode {           | struct MNode {
        LNode* prev;       |   SNode* prev;           |   MNode* prev;
        LNode* next;       |   SNode* next;           |   MNode* next;
        T object;          |   SNode* parent;         |   MNode* parent;
      };                   |   T object;              |   std::pair<const Key, T> data;
                           | }; // or equivalent      | }; // or equivalent
  Under the following assumptions:
 1. Pointers and ints are 4 bytes long. (Typical for 32-bit platforms.)
 2. sizeof(string) is 16. Note that this is just the size of the immediate string object
    and ignores any data buffers the string may itself allocate; the number and size of
    string's internal buffers will vary from implementation to implementation, but doesn't
    affect the comparative results below. (This sizeof(string) is the actual value of one
    popular implementation.)
 3. The default memory allocation strategy is to use fixed-size allocation where the block
    sizes are multiples of 16 bytes. (Typical for Microsoft Visual C++.)
   
        Container                       Basic node    Actual size of allocation block for node,
                                        data size     including internal node data alignment
                                                      and block allocation overhead
        ----------------------   ------------------   -----------------------------------------
        list<char>                     9 bytes         16 bytes        
        set<char>, multiset<char>      13 bytes        16 bytes        
        list<int>                      12 bytes        16 bytes        
        set<int>, multiset<int>        16 bytes        16 bytes        
        list<string>                   24 bytes        32 bytes
        set<string>, multiset<string>  28 bytes        32 bytes
       -----------------------------------------------------------------------------------------
       Note: same actual overhead per contained object
        (implementation-dependent assumptions: sizeof(string) == 16,
        4-byte pointers and ints, and 16-byte fixed-size allocation blocks)
    
 Memory allocation algorithms:
 first-fit:
  - The first block in the heap that is big enough is allocated
  - Each free block keeps a pointer to the next free block
  - These pointers may be adjusted as memory is allocated
  - We can keep track of free space in a separate list called the free list

 Other placement policies: best-fit, nearest-fit, worst-fit, next-fit ......
 
 memory leak: If a dynamically allocated variable leaves its scope before being
               recycled, the memory cannot be recycled, and the program will
               gradually drain away memory until it halts.
 dangling pointer: An invalid reference of this kind, a pointer to memory that has
                   already been freed, is known as a dangling pointer.
    |-----------|------------------------------------|------------------------------|
    |           | Manual Memory                      |   Automatic Memory           |
    |           | Management                         |    Management                |
    |-----------|------------------------------------|------------------------------|
    | Benefits  |   size (smaller)                   |  constrains complexity       |
    |           |   speed (faster)                   |                              |
    |           |   control (you decide when to free)|                              |
    |-----------|------------------------------------|------------------------------|        
    | Costs     |    complexity                      |larger total memory footprint |    
    |           |    memory leaks                    |“comparable” performance      |     
    |           |    dangling pointers               |                              |
    |-----------|------------------------------------|------------------------------|
  If you rewrite all of the functions in a program              
  as inline macros, you can increase the speed at which the  
  program executes. This is because the processor doesn’t have to  
  waste time jumping around to different memory locations. In addition,   
  because execution will not frequently jump to a nonlocal spot           
  in memory, the processor will be able to spend much of its time executing
  code in the cache.                                                      
 
  Likewise, you can make a program smaller by isolating every bit     
  of redundant code and placing it in its own function. While this will
  decrease the total number of machine instructions, this tactic will 
  make a program slower because not only does the processor spend     
  most of its time jumping around memory, but the processor’s cache  
  will also need to be constantly refreshed.                          
 
  memory allocation techniques:
  1. Bitmapped Allocation
     A bitmap records which chunks of memory are occupied, and a BST records the size of each allocation.
  2. Sequential Fit
     The sequential fit technique organizes memory into a linear linked
     list of free and reserved regions. When an allocation request occurs,
     the memory manager moves sequentially through the list until it finds
     a free block of memory that can service/fit the request (hence the name "sequential fit").
     Covers allocation, freeing, and coalescing.
     Candidate policies for choosing the free node:
     [ best-fit
       nearest-fit
       worst-fit
       next-fit ......
      ]
  3. Segregated Lists

   Kinds of Garbage Collection
          
   Reference counting collectors: identify garbage by maintaining a    
   running tally of the number of pointers that reference each block of
   allocated memory. When the number of references to a particular    
   block of memory reaches zero, the memory is viewed as garbage      
   and reclaimed. There are a number of types of reference counting  
   algorithms, each one implementing its own variation of the counting
   mechanism (i.e., simple reference counting, deferred reference    
   counting, 1-bit reference counting, etc.).    
   
    Follows these rules:
    - When an object is created, create a reference count (RC) field
    - Set the reference count to 1 upon creation, to account for   
      the object which initially points to this object             
    - When an object with pointers is created, increment the RC of 
      all of the objects pointed to by this object by 1            
    - When an object with pointers is destroyed (goes out of scope, etc.)
      decrement the RC of all objects pointed to by this object by 1
    - When a pointer is modified, decrement the RC of the old target
      by 1 and increment the RC of the new target by 1             
     - When an object's RC reaches 0, reclaim the object as free space
    
   Tracing garbage collectors traverse the application run-time environment
   (i.e., registers, stack, heap, data section) in search of              
   pointers to memory in the heap. Think of tracing collectors as         
   pointer hunter-gatherers. If a pointer is found somewhere in the       
   run-time environment, the heap memory that is pointed to is            
   assumed to be “alive” and is not recycled. Otherwise, the allocated  
   memory is reclaimed. There are several subspecies of tracing garbage   
   collectors, including mark-sweep, mark-compact, and copying            
   garbage collectors.                                                    
  
suballocator: a special-purpose application component, implemented by the programmer
              and built on existing services provided by application libraries
              (like malloc() and free()).

Suballocators are user code that reserves memory from the system-provided
heap and then manages that memory itself.

posted @ 2005-10-31 10:38 zfly reads(824) | comments(0)
 
Writing a Watchdog Program

STARTUPINFO
The STARTUPINFO structure is used with the CreateProcess function to specify main
window properties if a new window is created for the new process. For graphical
user interface (GUI) processes, this information affects the first window created
by the CreateWindow function and shown by the ShowWindow function. For console
processes, this information affects the console window if a new console is created
for the process.

DuplicateHandle
The DuplicateHandle function duplicates an object handle. The duplicate handle refers
to the same object as the original handle. Therefore, any changes to the object are
reflected through both handles. DuplicateHandle can be called by either the source
process or the target process. It can also be invoked where the source and target
process are the same. Note that DuplicateHandle should not be used to duplicate
handles to I/O completion ports.

Writing the monitor (watchdog) program:
  Win9x -> write a console program:
           CreatePipe()
           DuplicateHandle()
           CreateProcess() to obtain the console's input and output handles

  Windows 2000 / XP:
           OpenSCManager()
           OpenService()
           ControlService()
           QueryServiceStatus()
           StartService()

Writing a watchdog: monitor a long-running program, and when it exits abnormally, restart the service.
        
       

posted @ 2005-10-24 13:16 zfly reads(394) | comments(0)
 
Studying how to implement part of TCP's functionality on top of UDP!!

Reliable data transfer implemented on top of UDP is used where real-time requirements are high, such as in games.

posted @ 2005-10-19 09:36 zfly reads(412) | comments(1)
 
Shenzhou 6 and TCP/IP!!

  Shenzhou 6 launched successfully. In my excitement, and since I happen to be reading about TCP/IP lately, I wondered: how does Shenzhou communicate with the ground?
    From the material I've consulted (excerpts plus my own inference):
    Ground networks reach Shenzhou via satellite; the satellite carries gateway protocols, and Shenzhou carries the corresponding communication equipment. A satellite IP network
    differs from a terrestrial one in several ways:
    1. High channel bit error rate (BER). Burst errors are common in the space environment, signal-to-noise conditions are poor, and channel packet loss is high, while TCP is a
       loss-sensitive protocol that uses packet loss to govern its sending behavior and cannot distinguish congestion loss from link-degradation loss. A high BER triggers the
       window-reduction mechanism prematurely even though the network is not actually congested. In addition, the loss of ACK packets degrades throughput further.
    2. Propagation delay
  Several factors affect satellite network latency; the main one is the orbit type. Typically the one-way propagation delay is 20-25 ms for low-orbit systems,
    110-130 ms for medium-orbit systems, and 250-280 ms for geostationary systems.

    [[ Communication satellites usually sit in geostationary orbit about 36,000 km above the equator.
    A signal takes 239.6 ms to travel from one ground station to another, and the round-trip time (RTT) is 558 ms. The RTT, the delay between sending a signal
    and receiving the corresponding reply, is not caused by the satellite hop alone; it also includes other factors such as other path delays in the network and
    queueing time at gateways. If the path includes multiple satellite channels, the delay grows further. Because the satellite channel's feedback loop is so long,
    a TCP sender needs a long time to confirm whether a packet was received correctly. ]]

     System delay is also affected by inter-satellite routing, on-board processing, buffering, and other factors.
    In general, delay hurts TCP by slowing its response to packet loss, especially for connections that only want to send slightly more than the default
    initial window (just over one TCP segment): the sender must sit in slow start, waiting a full round-trip delay before the first ACK packet arrives.
    Satellite delay combined with ever-increasing channel speeds (10 Mbit/s or more) also demands effective buffering. The increased delay variance in turn
    injects noise into the estimates behind TCP's timer mechanism; this variance produces premature timeouts or retransmissions and abnormal window sizes,
    reducing overall bandwidth efficiency. Simply coarsening the TCP timer granularity does not help much here: although larger values reduce spurious timeouts,
    bandwidth under-utilization also grows with the longer delays.
   
    3) Channel asymmetry
  Many satellite systems have a large bandwidth asymmetry between the forward and return data channels; a slower return channel makes the receiver cheaper to build and
    conserves precious satellite bandwidth. Given that a great deal of TCP traffic is strongly unidirectional (e.g., from a web server to a remote host), a slow return
    channel is acceptable to a degree, but the asymmetric configuration still affects TCP significantly. For example, because ACK packets can be lost or queued behind
    larger data packets, a slow return channel causes harmful effects such as ACK loss and ACK compression, greatly reducing throughput; some studies show throughput
    decreasing exponentially as the asymmetry grows. Moreover, a large rate asymmetry between the forward and return channels noticeably aggravates forward buffer
    congestion, because line-rate error bursts are larger.
   
    TCP uses sliding-window-based flow and congestion control, exercised through the stream of acknowledgements (the receiver's advertised window).
    TCP uses an adaptive clock based on a round-trip timer (RTT) to tune its retransmission timeout.
    TCP uses a sliding-window mechanism to acknowledge data, and a strategy called "slow start" to avoid congestion.

    To retransmit lost or corrupted data, the sender must keep a copy of the data until it receives an acknowledgement (ACK).
    To keep a large number of possibly-lost copies from occupying memory and wasting bandwidth, TCP uses a sliding window to bound the amount of data in flight.
    As acknowledgements return, TCP slides the window forward while sending more and more data. Once the window is full, the sender must stop transmitting until more
    acknowledgements arrive.
   
    The basic principle: the sliding window covers a set of consecutive segment sequence numbers. On the sending side, the segments whose numbers fall inside the
    window may be sent back to back. These segments fall into three classes: sent but not yet acknowledged, not yet sent but eligible for sending, and sent and
    already acknowledged. Because unacknowledged segments remain at the front of the window, as soon as the leading segments are acknowledged the window slides
    forward by the corresponding amount, and the following segments that fall into the window become eligible to send. On the receiving side, sequence numbers
    inside the window correspond to frames the receiver is allowed to accept. Frames before the window have already been received and acknowledged, and will not
    be accepted again; frames beyond the window must wait until the window slides before they can be accepted. (The best description of this I have seen.)
    To make flow control effective, improve channel efficiency, and avoid congestion, TCP uses four congestion-control mechanisms: slow start, congestion
    avoidance, fast retransmit, and fast recovery. It controls the flow by adjusting the window size, avoiding congestion while making full use of the channel.
    The sender controls the flow with two variables: the congestion window (cwnd) and the slow-start threshold (ssthresh). cwnd is bounded by the receiver's
    advertised window, which is also the upper limit of the send window; cwnd grows or shrinks according to the congestion currently present in the network.
    When cwnd < ssthresh, cwnd is increased by the slow-start algorithm; when cwnd >= ssthresh, the congestion-avoidance algorithm is used instead.
    ssthresh is initialized to the receiver's advertised window, and its value is reset only after congestion is detected.

    Slow start and congestion avoidance: when a new connection is established, the slow-start algorithm is used to avoid congestion: cwnd is initialized to 1 and ssthresh to the receiver's advertised window. This forces TCP to wait for the corresponding acknowledgement (ACK) after each segment it sends; with each ACK received, cwnd increases by one, continuing until cwnd >= ssthresh or packet loss is detected. When cwnd >= ssthresh, cwnd grows by the congestion-avoidance algorithm instead, and very slowly: for each ACK received, cwnd increases by only 1/cwnd. Assuming one ACK per segment sent, cwnd grows by one segment per round-trip time.

    Although TCP can detect that data did not arrive, retransmitting it further aggravates channel congestion and in turn causes more data loss. To keep the network
    from collapsing under congestion, TCP can only respond to data loss by lowering its transmission rate. Algorithmically, though, every new TCP connection must
    start from the lowest transmission rate; TCP uses the returning ACKs as the signal to raise the rate, a slow, gradually increasing process. This is the
    so-called "slow start": the send window grows once per round-trip time as TCP probes for a sustainable throughput.

    A satellite network is a noisy, large bandwidth-delay product (BDP: bandwidth-delay product) network (large delay, high bit error rate, bandwidth asymmetry,
    and so on).
   
    Where TCP/IP needs improvement:

    1. Link-layer improvements
      Forward error correction (FEC) schemes and automatic repeat request (ARQ) protocols are the two main error-control methods.
      ARQ protocols come in three types: stop-and-wait, go-back-N, and selective repeat. Because of the extra retransmission delay, ARQ protocols are ill-suited to high-BER environments.
    2. TCP improvements
      For satellite TCP/IP data transfers, the delay is so long that with ordinary TCP the sliding-window size caps the satellite link's maximum throughput; likewise,
      because ACKs return from the satellite very slowly, TCP needs a long ramp-up time to reach full speed, even for a small data connection.
    Many tunable parameters can be used to enhance TCP performance, including the segment, timer, and window sizes. TCP implementations contain a large number of
      congestion-avoidance algorithms, such as slow start, selective retransmission, and selective acknowledgement, which can usually improve performance on a
      shared network like the Internet. But many congestion-control algorithms, slow start in particular, make inefficient use of end-to-end bandwidth when a moderate amount of data is in flight over a link with a large bandwidth-delay product.
      
    (1) Basic TCP improvements
     Because the satellite channel's delay is long, the traditional TCP sliding-window protocol and congestion algorithms lead to very low channel utilization.
     For example, when a connection is established, sending one segment and waiting for the corresponding ACK takes at least 500 ms, so slow start takes far longer than over terrestrial lines.
     Using longer segments improves TCP performance, but one problem with TCP is that its default window field is only 16 bits; since the required window size
     easily exceeds the maximum allowed 65,536 bytes, the maximum throughput is limited to roughly 1 Mbit/s (below the T1 rate). The window-scaling
     (window scaling) option solves this problem: it lets the connection negotiate a scale factor at setup, normally a power of two,
     allowing windows of up to 32 bits, which is ample for satellite networks. However, the enlarged window raises the problem of sequence-number wraparound,
     which requires the additional Protection Against Wrapped Sequence numbers (PAWS) mechanism.

     The timestamp echo option is also important for TCP over satellite networks.
    
     (2) Fast retransmit and fast recovery. If the sender receives no acknowledgement within a given time (the retransmission timeout, RTO), the segment is retransmitted.
       The RTO is derived from the RTT. In addition, after a timeout occurs, TCP assumes the network is congested, sets ssthresh = cwnd/2 and cwnd = 1,
       and begins slow start until cwnd reaches half the old cwnd, then switches to the congestion-avoidance algorithm to probe the network's remaining capacity.
       TCP always acknowledges the highest in-order segment: an ACK for segment X acknowledges all segments <= X. Moreover, if segments arrive out of order,
       the ACK repeats the highest in-order sequence number. For example, if segment 11 is lost and segment 12 arrives, the receiver sends another ACK for segment 10.
       Fast retransmit uses these duplicate acknowledgements to detect lost segments: after three duplicates, TCP concludes the segment really is lost
       and retransmits it without waiting for the RTO. After the fast retransmit, the fast-recovery algorithm adjusts the congestion window: first set ssthresh = cwnd/2
       and grow until cwnd equals half the old cwnd; after that, each acknowledgement received adds 1 to cwnd, and TCP sends new data as soon as cwnd allows.
       So after detecting a loss, TCP transmits at half its previous rate. Generally, fast retransmit can handle one lost segment per window.
       If several segments are lost, the sender must wait for the RTO to expire before retransmitting, and after the retransmission it re-enters slow start.
       How TCP handles congestion depends on how the congestion was detected. All four congestion-control algorithms above spend time verifying the network's carrying
       capacity, which inevitably wastes bandwidth, especially over long-delay satellite links. To avoid congestion collapse while balancing the interests of the whole
       network, using these four mechanisms well is critical.
    
     (3) Larger window sizes. TCP throughput is bounded as follows: throughput = window size / round-trip time. With a maximum window size of 65,535 bytes,
       throughput = 65,535 bytes / 560 ms ≈ 117,027 bytes/s. So even on a T1 satellite link (≈192 kbyte/s), using the maximum window size
       wastes channel capacity, and the buffer sizes must be tuned accordingly.
    
     (4) Selective acknowledgement improvements
     This class of protocol, called selective acknowledgement (TCP SACK), is a clear improvement to TCP. TCP SACK is a data-discovery algorithm
     in which the receiver can selectively indicate which data blocks (segments) were not received. This lets the sender retransmit precisely the
     missing packets, effectively reducing unnecessary retransmissions.

     Research shows that TCP SACK suits long-delay network environments with moderate loss rates (below 50% of the window size); for networks with
     more severe line loss, the forward ACK (FACK) proposal, built on top of SACK, is a better fit.

     [[ SACK: when several packets in the same window are lost, cumulative ACKs reveal only one lost packet per round-trip time,
        so original TCP can perform extremely poorly. The SACK option header can record several ranges representing the packet
        numbers not yet received, explicitly telling the sender exactly which packets are missing. ]]

     [[ FACK: the SACK option records the highest packet number received so far, called the forward ACK. Where other TCPs estimate the
        amount of data in flight from duplicate ACKs, the FACK algorithm uses the SACK option to compute the number of packets in the
        network explicitly, and keeps sending data as long as it is below cwnd. ]]

     4) Asymmetry considerations
   An effective remedy for channel asymmetry is to ensure adequate return bandwidth and use sufficiently large packets. Otherwise, the increased
     forward buffering must cope with larger line-rate error bursts.

posted @ 2005-10-18 10:01 zfly reads(1062) | comments(0)
 
Recent study notes, continually updated... ...

1. Mutual references between C++ classes

  The rule: classes that reference each other should each get their own .h and .cpp files (sharing a single .h and a single .cpp also works).
         In each .h file, only forward-declare the other class; never include the other class's header there.
         In each .cpp file, include the headers of the classes it actually uses.
         Don't put the function declarations and the function bodies in the same file, or you'll hit unexpected errors!!!

         a.h and b.h may be merged into one .h file,
         a.cpp and b.cpp may be merged into one .cpp file.
        
  a.h
  #ifndef _A_
  #define _A_

  class b;          // forward declaration only; no #include "b.h" here

  class a {
    friend class b;
  private:
    int aa;
    void a1(b m);
  };
  #endif

  a.cpp
  #include "stdafx.h"
  #include "a.h"
  #include "b.h"

  void a::a1(b m)
  {
    m.bb = 0;
  }

  b.h
  #ifndef _B_
  #define _B_

  class a;          // forward declaration only; no #include "a.h" here

  class b {
    friend class a;
  private:
    int bb;
    void zzz(a n);
  };
  #endif

  b.cpp
  #include "stdafx.h"
  #include "b.h"
  #include "a.h"

  void b::zzz(a m)
  {
    m.aa = 0;
  }

  main.cpp
  #include "stdafx.h"
  #include "a.h"
  #include "b.h"

  int main(int argc, char* argv[])
  {
      a aa;
      b bb;

      return 0;
  }

2. A handy linked-list idiom
   struct a {
     static a *mLinkedList; // declared as a static class-wide list head

     a *mNext;
     bool mCanRemoteCreate;

    a(bool canRemoteCreate)
    {
       mNext = mLinkedList;   // link this new instance at the head
       mLinkedList = this;
       mCanRemoteCreate = canRemoteCreate;
    }
    static int *create(const char *name);
  };

  a *a::mLinkedList = NULL; // definition / initialization
 
3. Flexible use of # and ##
Token-Pasting Operator (##)

#define paster( n ) printf( "token" #n " = %d", token##n )
int token9 = 9;
If a macro is called with a numeric argument like
paster( 9 );
the macro yields
printf( "token" "9" " = %d", token9 );
which becomes
printf( "token9 = %d", token9 );

Stringizing Operator (#)
#define stringer( x ) printf( #x "\n" )
int main()
{
    stringer( In quotes in the printf function call\n );
    stringer( "In quotes when printed to the screen"\n );
    stringer( "This: \"  prints an escaped double quote" );
}
Such invocations would be expanded during preprocessing, producing the following code:
int main()
{
   printf( "In quotes in the printf function call\n" "\n" );
   printf( "\"In quotes when printed to the screen\"\n" "\n" );
   printf( "\"This: \\\" prints an escaped double quote\"" "\n" );
}
When the program is run, screen output for each line is as follows:
In quotes in the printf function call
"In quotes when printed to the screen"
"This: \" prints an escaped double quotation mark"
#define IMPLEMENT_NETCONNECTION(className, classGroup, canRemoteCreate) \
   NetClassRep* className::getClassRep() const { return &className::dynClassRep; } \
   NetClassRepInstance<className> className::dynClassRep(#className, 0, NetClassTypeNone, 0); \
   NetClassGroup className::getNetClassGroup() const { return classGroup; } \
   static NetConnectionRep g##className##Rep(&className::dynClassRep, canRemoteCreate)

4. Enums: values start at 0; each successive enumerator is one larger than the previous, unless explicitly specified.
   By default, the first enumerator has a value of 0, and each successive enumerator is one larger
   than the value of the previous one, unless you explicitly specify a value for a particular
   enumerator. Enumerators needn’t have unique values. The name of each enumerator is treated
   as a constant and must be unique within the scope where the enum is defined. An enumerator
   can be promoted to an integer value. However, converting an integer to an enumerator requires
   an explicit cast, and the results are not defined.

=========================
Some elegant numeric algorithms
5.1 /// Determines if number is a power of two.
    /// (Note: this also returns true for 0, so add a zero check if that matters.)
 inline bool isPow2(const U32 number)
 {
    return (number & (number - 1)) == 0;
 }
5.2 How floating-point numbers are stored

    single precision   | 1 |    8     |    23    |
                       sign  exponent   mantissa
    double precision   | 1 |   11     |    52    |
                       sign  exponent   mantissa

    10110.100011 -> 1.0110100011 * 2^4

    sign bit: 0
    mantissa: 0110100011 (padded to 23 bits)
    exponent: 4, stored with a bias of 127: 4 + 127 = 131 -> 10000011
    so, in IEEE 754: 0 10000011 01101000110000000000000

    -0.0010011 -> -1.0011 * 2^-3
    sign bit: 1
    mantissa: 0011 (padded to 23 bits)
    exponent: -3 + 127 = 124 -> 01111100
    so: 1 01111100 00110000000000000000000
   
    /// Determines the binary logarithm of the input value rounded down to the nearest power of 2.
 inline U32 getBinLog2(U32 value)
 {
    F32 floatValue = F32(value);
    return (*((U32 *) &floatValue) >> 23) - 127;
 }

=========================
Pattern-oriented programming:
templates,
reference counting,
and object pointers
combine into something very powerful.
posted @ 2005-10-11 11:05 zfly reads(695) | comments(0)
 
What a grind!
I want to quit!! Any company interested in contacting me??? wuyong201203@yahoo.com.cn
posted @ 2005-09-21 11:58 zfly reads(119) | comments(0)
 