No subject


Thu Jan 29 10:34:04 CET 2009


Comments on the patch under test:
http://lists.openmoko.org/pipermail/openmoko-kernel/2009-March/009666.html
Changed: do not reschedule the network queue when the interface is opened/connected.
Removed: waking the network queue on every transmit completion.
Added: waking the network queue when the packet queue is below its limit.
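
For illustration, the added wake path might look roughly like this (a minimal
sketch; the callback signature and the use of AR_SOFTC_T as the context are
simplified assumptions, not the actual driver code):

/* Sketch only: simplified signature, not the real HTC callback prototype.
 * Called once the endpoint's send queue has drained back below
 * MaxSendQueueDepth, so the network stack may transmit again. */
static void ar6000_tx_queue_avail(void *context)
{
        AR_SOFTC_T *ar = (AR_SOFTC_T *)context;

        if (netif_queue_stopped(ar->arNetDev))
                netif_wake_queue(ar->arNetDev);
}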

> @@ -1278,6 +1280,7 @@
>          connect.EpCallbacks.EpRecv = ar6000_rx;
>          connect.EpCallbacks.EpRecvRefill = ar6000_rx_refill;
>          connect.EpCallbacks.EpSendFull = ar6000_tx_queue_full;
> +        connect.EpCallbacks.EpSendAvail = ar6000_tx_queue_avail;
>            /* set the max queue depth so that our ar6000_tx_queue_full handler gets called.
>             * Linux has the peculiarity of not providing flow control between the
>             * NIC and the network stack. There is no API to indicate that a TX packet
>             * was sent which could provide some back pressure to the network stack.
>             * Under linux you would have to wait till the network stack consumed all sk_buffs
>             * before any back-flow kicked in. Which isn't very friendly.
>             * So we have to manage this ourselves */
>        connect.MaxSendQueueDepth = 32;
>
>You're not supposed to have a lot of TX packets
> queued at the driver level. You should only queue enough that you have time
> to prepare a new packet for the hardware while the hardware is still busy
> sending the previous packet. If you queue many packets at the driver level,
> the network stack has no chance of reordering the packets. It also won't
> know that your hardware is already fully occupied after 2-3 calls of
> dev->hard_start_xmit(). To me, it looks from the comment above as if the
> ar6000 driver is shooting itself in the foot by maintaining a large queue,
> because doing so causes exactly the problem that the comment complains
> about. netif_stop_queue()/netif_wake_queue() _is_ the flow control.
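
For reference, the conventional pattern described above looks roughly like
this in a generic driver (all names here are invented for illustration; this
is not ar6000 code):

/* Stop the stack as soon as the hardware is fully occupied, so the backlog
 * stays in the qdisc where it can still be (re)ordered... */
static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct my_priv *priv = netdev_priv(dev);

        my_queue_to_hw(priv, skb);
        if (my_tx_ring_full(priv))
                netif_stop_queue(dev);
        return NETDEV_TX_OK;
}

/* ...and wake it from the TX-completion interrupt once there is room again;
 * this is the back pressure to the network stack. */
static void my_tx_complete(struct net_device *dev)
{
        struct my_priv *priv = netdev_priv(dev);

        if (netif_queue_stopped(dev) && !my_tx_ring_full(priv))
                netif_wake_queue(dev);
}
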
I don't fully understand you...
The AR6K architecture is a little different from a traditional NIC. In the card's flow model, several flow priorities are
attached to endpoints, and each endpoint has its own queue for moving data over the SDIO interface to the baseband/NIC.
The card's internal queue limits are called 'credits' in the driver, and if a high-priority stream has no packets, its
credits can be given to a lower-priority stream (i.e. a form of QoS). So really we do not wait for a packet to be prepared
for transfer; we wait for an internal card buffer to become free so it can take the next outgoing packet. Ideally, flow
control would live at the 'credits' management level, and the only queue in the system would be the skb queue. If the
priority scheme has to be kept, we would possibly need to think about several TX queues.
Anyway, that is how I imagine AR6K works.
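
Roughly what I mean by the credit mechanism, as a sketch with invented names
(not the real AR6K/HTC code):

/* Invented illustration of per-endpoint credit accounting. */
static void hand_to_card(struct sk_buff *skb);   /* hypothetical HTC/SDIO send */

struct ep_queue {
        int credits;               /* free card buffers for this endpoint */
        struct sk_buff_head txq;   /* packets waiting for a credit */
};

/* A packet goes to the card only when its endpoint holds a credit. */
static void ep_send(struct ep_queue *ep, struct sk_buff *skb)
{
        if (ep->credits == 0) {
                skb_queue_tail(&ep->txq, skb);   /* wait for a credit */
                return;
        }
        ep->credits--;
        hand_to_card(skb);
}

/* The card returns credits as its internal buffers drain; credits unused by
 * an idle high-priority endpoint may be lent to a lower-priority one. */
static void ep_credits_returned(struct ep_queue *ep, int n)
{
        ep->credits += n;
        while (ep->credits > 0 && !skb_queue_empty(&ep->txq)) {
                ep->credits--;
                hand_to_card(__skb_dequeue(&ep->txq));
        }
}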

> If you want a large TX queue, look at dev->tx_queue_len. You can check
> the value with ifconfig. Example: ...

$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:40:17:8E:ED:19
          inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:162461 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1593670 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10985972 (10.4 MiB)  TX bytes:2357916964 (2.1 GiB)
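
(For reference, tx_queue_len can be changed at runtime with
"ifconfig eth1 txqueuelen 500" or "ip link set dev eth1 txqueuelen 500",
and a driver can also set dev->tx_queue_len before register_netdev().)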

>   The default dev->tx_queue_len was raised from 100 to 1000 in 2003-2004
> IIRC.
>
>   FWIW, when I wrote the driver for the NE3200 10 Mbps Ethernet card, I
> found that a TX buffer large enough to hold three full size packets was
> enough to keep the number of queued, unsent packet bytes at least as large
> as the number of packet bytes being transmitted by the chip.
>
>> @@ -2317,7 +2315,7 @@
>>          /* flush data queues */
>>      ar6000_TxDataCleanup(ar);
>>
>> -    netif_wake_queue(ar->arNetDev);
>> +    netif_start_queue(ar->arNetDev);
>>
>>      if ((OPEN_AUTH == ar->arDot11AuthMode) &&
>>          (NONE_AUTH == ar->arAuthMode)      &&
>
>   Out of curiousity, what does this change do?

I mean that while the queue has no actual data, there is no need to reschedule it and raise a softirq, which saves a bit of time.

netif_start_queue -> clears the XOFF state and allows the stack to call the device's hard_start_xmit.
netif_wake_queue -> clears the XOFF state -> reschedules the qdisc -> raise_softirq_irqoff(NET_TX_SOFTIRQ).
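
Roughly, in simplified form (a paraphrase of the older netdevice.h inlines,
not verbatim kernel source; in newer kernels the XOFF bit lives in the
per-TX-queue state, but the difference between the two is the same):

static inline void netif_start_queue(struct net_device *dev)
{
        clear_bit(__LINK_STATE_XOFF, &dev->state);       /* just allow TX */
}

static inline void netif_wake_queue(struct net_device *dev)
{
        if (test_and_clear_bit(__LINK_STATE_XOFF, &dev->state))
                __netif_schedule(dev);   /* reschedule qdisc -> NET_TX softirq */
}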

But, as I wrote in the thread, if that call was not there by accident, it may be better not to change it.

--
Regards Ivan 



