

tx_Csum
	A non-zero value specifies that the TX hardware checksum offload feature is enabled.
	A zero value specifies that the feature is disabled. By default it is enabled.
	Only Boylston Lite (LAN9211) supports this feature.

rx_Csum
	A non-zero value specifies that the RX hardware checksum offload feature is enabled.
	A zero value specifies that the feature is disabled. By default it is enabled.
	Only Boylston Lite (LAN9211) supports this feature.

Scatter_gather
	A non-zero value specifies that the hardware supports scatter-gather.
	A zero value specifies that the feature is disabled. By default it is enabled.
	Only Boylston Lite (LAN9211) supports this feature.


AutoMdix
	0: Override Strap, Disable AutoMdix, Using Straight Cable
	1: Override Strap, Disable AutoMdix, Using CrossOver Cable
	2: Override Strap, Enable AutoMdix
	>=3 or No Keyword: AutoMdix controlled by Strap


###########################################################
################# RX PERFORMANCE TUNING ###################
###########################################################





Under most real-world conditions, traffic is flow controlled at the
upper layers of the protocol stack. The most common example is TCP
traffic. Even UDP traffic is usually flow controlled in the sense
that the transmitter does not normally send continuous wire-speed
traffic. When high-level flow control is in use, it usually produces
the best performance on its own with no intervention from the driver.
But if high-level flow control is not in use, the driver will work as
fast as it can to receive packets and pass them up to the OS. In
doing so it will hog CPU time, and the OS may drop packets because it
cannot process them fast enough. It has been found that during these
heavy traffic conditions, throughput can be significantly improved if
the driver voluntarily releases the CPU. Yes, this causes more
packets to be dropped on the wire, but a greater number of packets
are spared because the OS has more time to process them.





As of version 1.05 and later, the driver implements a new flow
control detection method that operates as follows. If the driver
detects an excessive work load within a 100 ms period, it assumes
there is insufficient flow control in use. Therefore it turns on
driver-level flow control, which significantly reduces the amount of
time the driver holds the CPU and causes more packets to be dropped
on the wire. Balancing/tuning therefore needs to be performed on each
system to get the best performance under these conditions. If,
however, the driver detects a tolerable work load in a 100 ms period,
it assumes flow control is being managed well and does not attempt to
intervene by releasing the CPU. Under these conditions the driver
will naturally release the CPU to the OS, since the system as a whole
is keeping up with traffic.





Now that the background has been discussed, it is necessary to
describe how the driver implements flow control. The method used is a
work-load model. This is similar to, but not exactly the same as,
throughput. Work load is the sum of all the packet sizes plus a
constant per-packet cost for each packet. This provides more balance.
For instance, a stream of maximum-size packets gives a high
throughput with few packets, while a stream of minimum-size packets
gives a low throughput with many packets. Adding a per-packet cost to
the work load allows both cases to activate and manage flow control.
If the work load only counted packet sizes, then only a stream of
maximum-size packets would activate flow control, while a stream of
minimum-size packets would not.





There are five primary parameters responsible for managing flow
control. They can be found in the FLOW_CONTROL_PARAMETERS structure.
They are
	MaxThroughput, the maximum throughput measured in a 100 ms
		period during an rx TCP_STREAM test.
	MaxPacketCount, the maximum packet count measured in a 100 ms
		period during an rx TCP_STREAM test.
	PacketCost, the optimal per-packet cost.
	BurstPeriod, the optimal burst period in 100 us units.
	IntDeas, the value for the interrupt deassertion period.
	    It is written to the INT_DEAS field of the INT_CFG register.
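As a sketch, the structure might look like this. The field types here are an assumption for illustration; check the FLOW_CONTROL_PARAMETERS declaration in the driver source for the real layout.

```c
#include <stdint.h>

/* Illustrative sketch of the five tuning fields described above.
 * The real FLOW_CONTROL_PARAMETERS declaration lives in the driver
 * source; the uint32_t types here are an assumption. */
typedef struct {
    uint32_t MaxThroughput;  /* max bytes measured in a 100 ms period   */
    uint32_t MaxPacketCount; /* max packets measured in a 100 ms period */
    uint32_t PacketCost;     /* per-packet cost added to the work load  */
    uint32_t BurstPeriod;    /* burst period, in 100 us units           */
    uint32_t IntDeas;        /* INT_DEAS field of the INT_CFG register  */
} FLOW_CONTROL_PARAMETERS;
```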


The driver calls Platform_GetFlowControlParameters to get these
parameters for the specific platform and current configuration. These
parameters must be carefully chosen by the driver writer so that
optimal performance can be achieved. Therefore it is necessary to
describe how the driver uses each parameter.





The first three parameters, MaxThroughput, MaxPacketCount, and
PacketCost, are used to calculate the secondary parameter MaxWorkLoad
according to the following formula:

	MaxWorkLoad=MaxThroughput+(MaxPacketCount*PacketCost);

MaxWorkLoad represents the maximum work load the system can handle
during a 100 ms period.

The driver uses MaxWorkLoad and BurstPeriod to determine
BurstPeriodMaxWorkLoad, which represents the maximum amount of work
the system can handle during a single burst period. It is calculated
as follows:

	BurstPeriodMaxWorkLoad=(MaxWorkLoad*BurstPeriod)/1000;





Every 100 ms the driver measures CurrentWorkLoad by the following
algorithm.

	At the beginning of a 100 ms period
		CurrentWorkLoad=0;

	During a 100 ms period, CurrentWorkLoad is adjusted when a packet
	arrives according to the following formula:
		CurrentWorkLoad+=PacketSize+PacketCost;

	At the end of a 100 ms period
		if(CurrentWorkLoad>((MaxWorkLoad*(100+ActivationMargin))/100))
		{
			if(!FlowControlActive) {
				FlowControlActive=TRUE;
				//Do other flow control initialization
				BurstPeriodCurrentWorkLoad=0;
			}
		}
		if(CurrentWorkLoad<((MaxWorkLoad*(100-DeactivationMargin))/100))
		{
			if(FlowControlActive) {
				FlowControlActive=FALSE;
				//Do other flow control clean up
				Enable receiver interrupts
			}
		}
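For readers who prefer compilable code, the end-of-period check can be sketched as below. ActivationMargin and DeactivationMargin are percentages that add hysteresis so flow control does not flap when the work load hovers near MaxWorkLoad; the 10% values here are illustrative assumptions, not the driver's defaults.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative margins (percent), assumed values - hysteresis keeps flow
 * control from toggling on small fluctuations around MaxWorkLoad. */
enum { ActivationMargin = 10, DeactivationMargin = 10 };

static bool     FlowControlActive;
static uint32_t BurstPeriodCurrentWorkLoad;

static void end_of_100ms_period(uint32_t CurrentWorkLoad, uint32_t MaxWorkLoad)
{
    if (CurrentWorkLoad > ((MaxWorkLoad * (100 + ActivationMargin)) / 100)) {
        if (!FlowControlActive) {
            FlowControlActive = true;
            /* ...other flow control initialization... */
            BurstPeriodCurrentWorkLoad = 0;
        }
    }
    if (CurrentWorkLoad < ((MaxWorkLoad * (100 - DeactivationMargin)) / 100)) {
        if (FlowControlActive) {
            FlowControlActive = false;
            /* ...other flow control clean up...
             * Re-enable receiver interrupts here. */
        }
    }
}
```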





During periods where flow control is active, that is
FlowControlActive==TRUE, the driver manages flow control by the
following algorithm.

	At the end/beginning of a burst period
		if(BurstPeriodCurrentWorkLoad>BurstPeriodMaxWorkLoad) {
			BurstPeriodCurrentWorkLoad-=BurstPeriodMaxWorkLoad;
		} else {
			BurstPeriodCurrentWorkLoad=0;
		}
		Enable receiver interrupts

	When checking if a packet has arrived
		if(BurstPeriodCurrentWorkLoad<BurstPeriodMaxWorkLoad) {
			//check for packet normally
			BurstPeriodCurrentWorkLoad+=PacketSize+PacketCost;
		} else {
			//Do not check for packet, but rather
			//  behave as though there is no new packet.
			Disable receiver interrupts
		}
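The same burst-period accounting as compilable C. This is a sketch only: enabling/disabling the receiver interrupt is modelled with a flag here, since the real operation is a register write in the driver.

```c
#include <stdbool.h>
#include <stdint.h>

static uint32_t BurstPeriodCurrentWorkLoad;
static bool     rx_interrupts_enabled = true;

/* Called at each burst-period boundary: carry over any excess work
 * from the previous period and let the receiver run again. */
static void burst_period_boundary(uint32_t BurstPeriodMaxWorkLoad)
{
    if (BurstPeriodCurrentWorkLoad > BurstPeriodMaxWorkLoad)
        BurstPeriodCurrentWorkLoad -= BurstPeriodMaxWorkLoad;
    else
        BurstPeriodCurrentWorkLoad = 0;
    rx_interrupts_enabled = true;
}

/* Called when checking for a new packet; returns true if the packet
 * should be processed normally. */
static bool may_check_for_packet(uint32_t BurstPeriodMaxWorkLoad,
                                 uint32_t PacketSize, uint32_t PacketCost)
{
    if (BurstPeriodCurrentWorkLoad < BurstPeriodMaxWorkLoad) {
        BurstPeriodCurrentWorkLoad += PacketSize + PacketCost;
        return true;
    }
    rx_interrupts_enabled = false; /* budget spent: release the CPU */
    return false;
}
```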





This algorithm allows the driver to do a specified amount of work and
then give up the CPU until the next burst period. Doing this allows
the OS to process all the packets that have been sent to it.





So that is generally how the driver manages flow control. For more
detail, refer to the source code. Now it is necessary to describe the
exact method for obtaining the optimal flow control parameters.





When obtaining the optimal flow control parameters it is important to
note the configuration you are using. Generally there are 8
configurations for each platform. They involve the following options
	DMA or PIO
	16 bit or 32 bit
	118/117/112 or 116/115
Some platforms may only use 16-bit mode, while other platforms may
have a selectable clock rate. Whatever the options are, every
combination should be identifiable, and
Platform_GetFlowControlParameters should be implemented to provide
the correct flow control parameters for each. It is important to be
sure that the carefully selected parameters are applied to the same
configuration used during tuning.





Flow control tuning requires a publicly available client/server pair
of programs called netperf and netserver. They are built from the
same makefile and can be found on the web.





Fortunately, as of version 1.10, smsc9118 and cmd9118 support an
automated tuning mechanism. The process takes about one hour and can
be initiated as follows.





AUTOMATED TUNING:

Choose the configuration you want to tune. That is, choose between
  DMA or PIO,
  16 bit or 32 bit,
  118/117/112 or 116/115.
Make sure there is a direct connection between the target platform
and the host platform. Do not use a hub or switch. The target
platform is the platform that will run this driver. The host platform
should be a PC easily capable of sending wire-speed traffic.





Install the driver on your target platform with your chosen
configuration.
	insmod smsc9118.o
	ifconfig eth1 192.1.1.118
Load the servers on the target platform:
	netserver
	cmd9118 -cSERVER
On the host platform, make sure the netperf executable is located in
the same directory as the cmd9118 executable. While in that
directory, run the following:
	cmd9118 -cTUNER -H192.1.1.118
This command, if successful, will begin the one-hour tuning process.
At the end you will get a dump of the optimal flow control
parameters.


The key parameters needed are
    MaxThroughput
    MaxPacketCount
    PacketCost
    BurstPeriod
    IntDeas
These values must be assigned in Platform_GetFlowControlParameters to
	flowControlParameters->MaxThroughput
	flowControlParameters->MaxPacketCount
	flowControlParameters->PacketCost
	flowControlParameters->BurstPeriod
	flowControlParameters->IntDeas
Make sure that Platform_GetFlowControlParameters checks the current
configuration and only sets those parameters if the current
configuration matches the configuration you tuned with.
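A sketch of such a configuration check is below. The function signature (the use_dma and bus_width arguments), the structure declaration, and the numeric values are all hypothetical placeholders; the real function and the way it identifies the active configuration are defined by the driver's platform code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholder for the driver's FLOW_CONTROL_PARAMETERS structure. */
typedef struct {
    uint32_t MaxThroughput;
    uint32_t MaxPacketCount;
    uint32_t PacketCost;
    uint32_t BurstPeriod;
    uint32_t IntDeas;
} FLOW_CONTROL_PARAMETERS;

/* Sketch only: apply the tuned values solely for the configuration they
 * were measured with (here, hypothetically, DMA + 32-bit).  Any other
 * configuration keeps conservative defaults until it is tuned too. */
static void Platform_GetFlowControlParameters(FLOW_CONTROL_PARAMETERS *p,
                                              bool use_dma, int bus_width)
{
    if (use_dma && bus_width == 32) {
        p->MaxThroughput  = 1200000; /* hypothetical tuner output */
        p->MaxPacketCount = 8000;
        p->PacketCost     = 170;
        p->BurstPeriod    = 100;
        p->IntDeas        = 3;
    } else {
        /* Untuned configuration: use a work load bound the system will
         * never exceed, so driver flow control stays out of the way. */
        p->MaxThroughput  = 0xFFFFFFFF;
        p->MaxPacketCount = 0;
        p->PacketCost     = 0;
        p->BurstPeriod    = 100;
        p->IntDeas        = 0;
    }
}
```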





Next, start over, but choose a configuration you haven't already
tuned.





MANUAL TUNING:

On the off chance that the automated tuning fails to work properly,
you may use the following manual tuning procedure.





STEP 1:
	Select a configuration. That is, choose between DMA or PIO, 16 bit
	or 32 bit, 118/117/112 or 116/115.
	Make sure there is a direct connection between the target platform
	and the host platform. Do not use a hub or switch. The target
	platform is the platform that will run this driver. The host
	platform should be a PC easily capable of sending wire-speed
	traffic.





STEP 2:
	Load the driver on the target platform with the following commands:
		insmod smsc9118.o max_work_load=0 int_deas=ID
		ifconfig eth1 192.1.1.118
		netserver
	ID will be replaced by the number you will be adjusting to obtain
		the best throughput score in STEP 3; initially, a good number
		to start with is 0.





STEP 3:
	On the host platform run the following command:
		netperf -H192.1.1.118
	Examine the output. The goal is to maximize the number in the
		Throughput column.
	If you are satisfied with the throughput, remember the ID number
		you used and move on to STEP 4.
	If you would like to try improving the throughput, unload the
		driver on the target with
			ifconfig eth1 down
			rmmod smsc9118
		then go to STEP 2 and use a different value for ID.




	


STEP 4:
	Unload the driver with
		ifconfig eth1 down
		rmmod smsc9118
	Load the driver on the target platform with the following commands:
		insmod smsc9118.o max_work_load=0 int_deas=ID
		ifconfig eth1 192.1.1.118
		netserver
	NOTE: the driver will be making traffic measurements. Therefore it
		is important not to insert any steps between STEPS 4 and 6.


		


STEP 5:
	Run netperf on the host platform:
		netperf -H192.1.1.118
	Repeat two more times.


STEP 6:
	On the target platform run the following:
		cmd9118 -cGET_FLOW
	Many variables will be displayed. Two of them are the measurements
	we need. Set the following two parameters as follows:
		MaxThroughput  = RxFlowMeasuredMaxThroughput;
		MaxPacketCount = RxFlowMeasuredMaxPacketCount;


STEP 7:
	Unload the driver on the target platform with
		ifconfig eth1 down
		rmmod smsc9118
	Apply the parameters obtained in STEPS 6 and 2/3 to the
		appropriate location, given the configuration chosen in
		STEP 1, in Platform_GetFlowControlParameters. The parameters
		for your chosen configuration should be set as follows:
			MaxThroughput = (RxFlowMeasuredMaxThroughput from STEP 6);
			MaxPacketCount = (RxFlowMeasuredMaxPacketCount from STEP 6);
			PacketCost=0; //temporarily set to 0
			BurstPeriod=100; //temporarily set to 100
			IntDeas = (ID from STEP 2/3);
	Recompile the driver.


		


STEP 8:
	Again, make sure you're still using the same configuration you
		selected in STEP 1.
	Load the recompiled driver on the target platform with the
		following commands:
		insmod smsc9118.o burst_period=BP
		ifconfig eth1 192.1.1.118
	BP will be replaced by the number you will be adjusting to obtain
		the best throughput score in STEP 9; initially, a good number
		to start with is 100.





STEP 9:
	On the host platform run the following command:
		netperf -H192.1.1.118 -tUDP_STREAM -l10 -- -m1472
	Examine the output. The goal is to maximize the lower number in
		the Throughput column.
	If you are satisfied with the throughput, remember the BP number
		you used and move on to STEP 10.
	If you would like to try improving the throughput, unload the
		driver on the target with
			ifconfig eth1 down
			rmmod smsc9118
		then go to STEP 8 and use a different value for BP.


		


STEP 10:
	Unload the driver on the target platform:
		ifconfig eth1 down
		rmmod smsc9118
	Again, make sure you're still using the same configuration you
		selected in STEP 1.
	Load the recompiled driver from STEP 7 on the target platform with
