📄 tier4.cl.bak
188738|Port /Domain security enhancements to TUX8.0|domain/security|8.0|291|Hong-Hsi Lo|Jian Dong|IB|N/A|closed|194482|
The functionality and design of the security enhancement are beyond my
knowledge. What I do know is that these CRs introduced buggy code into Tuxedo
7.1/8.0/8.1, causing dmadmin to crash on some 64-bit platforms. By the time
Jian found the problem in 8.1, tens of rolling patches containing the bug had
already been delivered. Further investigation shows that 7.1 and 8.0 are
affected too.

The code should have been better crafted and/or tested.

CR194482 was filed to track this problem. Mongmou solved that CR in the
following rolling patches:
8.1: RP130
8.0: RP305
7.1: RP318
|11/12/2004
190423|Port the /Domain enhancements to Tuxedo 8.1|domain/security|8.1|118|Hong-Hsi Lo|Jian Dong|IB|N/A|closed|194482|
The functionality and design of the security enhancement are beyond my
knowledge. What I do know is that these CRs introduced buggy code into Tuxedo
7.1/8.0/8.1, causing dmadmin to crash on some 64-bit platforms. By the time
Jian found the problem in 8.1, tens of rolling patches containing the bug had
already been delivered. Further investigation shows that 7.1 and 8.0 are
affected too.

The code should have been better crafted and/or tested.

CR194482 was filed to track this problem. Mongmou solved that CR in the
following rolling patches:
8.1: RP130
8.0: RP305
7.1: RP318
|11/12/2004
33977|Tux 6.5 NT /WS client ULOG loops with errors on timeout when NOTIFY=SIGNAL|ws gpnet|Since 6.5 GA||Liang Luo|Huan Zhou|IB|8.0/301|open(partly)|189869/020645|
For a /WS client, each SIGPOLL signal triggers the client to receive messages
from the network, including unsolicited notifications. The situation of
CR033977 is as follows:
* The /WS client handles unsolicited messages using SIGNAL mode.
* After tpinit(), the network connection is closed by the WSH before the
  client calls tpcall().
The problem reported in CR033977 is that in the above case, the /WS client's
ULOG is flooded with messages indicating network message receive failures.
The problem happens on Windows only.

On Windows, gpnet converts winsock's message into a simulated signal
(SIGPOLL). For some reason, the signal floods the application. (It does not
seem to be a Windows bug, since my research shows that in the normal case the
message is delivered only once, just as the documentation says. My guess is
that it is related to the way the /WS code switches the socket between sync
and async mode while handling the message.) However, the fix for CR033977 was
to ignore the FD_CLOSE message at the gpnet layer (in PollWndProc())! That
means no process using gpnet can be signaled when the socket is closed by the
peer or due to a network error.

The side effect was finally exposed by CR189860, because the ISH relies on the
network event to know that a remote CORBA client has logged off. The solution
was to revert gpnet's fix from CR033977 and fix CR033977 in the /WS code
instead. However, I did not plan to propagate CR189860's fix to other
releases, so CR033977's side effect still exists in other releases.
|11/12/2004
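To make the FD_CLOSE problem concrete, here is a minimal sketch of a winsock
async-select window procedure that turns socket events into a simulated
SIGPOLL, written only from the description above. PollWndProc is the name
given in the CR; WM_GPNET_EVENT and deliver_simulated_sigpoll() are
placeholder names, not the real gpnet symbols.

/* Sketch of the gpnet-style message-to-signal bridge on Windows. */
#include <winsock2.h>
#include <windows.h>

#define WM_GPNET_EVENT (WM_USER + 1)   /* assumed private message id */

/* Stand-in for whatever raises Tuxedo's simulated SIGPOLL. */
static void deliver_simulated_sigpoll(SOCKET s)
{
    (void)s;   /* illustration only */
}

/* The socket is assumed to be registered once, e.g. after connect():
 *   WSAAsyncSelect(s, hwnd, WM_GPNET_EVENT, FD_READ | FD_CLOSE);
 */
LRESULT CALLBACK PollWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_GPNET_EVENT) {
        SOCKET s     = (SOCKET)wParam;
        int    event = WSAGETSELECTEVENT(lParam);

        switch (event) {
        case FD_READ:
            deliver_simulated_sigpoll(s);   /* data arrived: wake the client */
            break;
        case FD_CLOSE:
            /* CR033977's gpnet fix simply dropped this event, so no gpnet
             * user was ever signaled when the peer closed the connection --
             * the side effect later exposed by CR189860. */
            deliver_simulated_sigpoll(s);
            break;
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}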
62310|Tux 8.0/Windows - No exception thrown; instead, CORBA clients time out when MAXWSCLIENTS is exceeded|ISH|wle5.1(prop'd to 8.0/092)|101|Jeff Michaud (prop'd to 8.0 by Wang Lai Li)|Huan Zhou|IB|8.0/301|Open (Partly)|189745/189860/059998|
Two variables are used by ISL/ISH for client dispatching:
* timestamp is used by the ISL to indicate whether an ISH is busy handshaking
  with a remote client. When the ISL assigns a connection to an ISH, it sets
  that ISH's timestamp to the current time. When the ISH finishes handshaking
  with the client, it resets its timestamp to zero so that the ISL can assign
  it another client.
* num_clts indicates how many remote clients an ISH is currently handling.
  When the value reaches max_clts, the ISL will not assign any more remote
  clients to that ISH.

In 8.0 GA, timestamp is cleared in _iiop_nwmsg_rcv(), init_nwmsg_rcv() and
sslplus_cipher_notify_callback(), while num_clts is changed in _wsh_mkcntxt()
and _wsh_rmcntxt(). This design opens a time window during which timestamp is
already cleared but num_clts has not yet been increased. The problem was
exposed first in wle5.0 (CR059998) and then in wle5.1 (CR062310). Jeff's fix
for CR059998 moves the timestamp operation back to _wsh_mkcntxt(); that fix is
not of interest to this document.

In his fix for CR062310, Jeff believed he had found a better solution that
could improve performance significantly. The solution was:
* wsh_accepted() clears timestamp and increases num_clts
* wsh_close() decreases num_clts
* all other statements touching timestamp/num_clts are removed

However, this solution made the code more dependent on the network, as
follows: when a remote client fails during tpinit(), _wsh_rmcntxt() is called.
But with this fix, num_clts is not decreased until the ISH detects that the
client has closed the network connection. Due to CR189860, it could take a
couple of minutes before the ISH detects that the connection is gone. During
this time window, the ISH is not able to handle remote CORBA connections.

Given this history, I could not find a clean way to solve the problem without
breaking some reliance on legacy rolling patches. But since CR189860 is fixed
for 8.0, this problem should not happen in practice, although this is not a
perfect solution.
The side effect of CR062310 still exists in wle5.1.
|11/12/2004
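For illustration, here is a minimal sketch of the CR062310 bookkeeping
described above. Only the names timestamp, num_clts, max_clts, wsh_accepted()
and wsh_close() come from the CR; the struct layout and everything else are
assumptions, not the real ISH source.

#include <time.h>

struct ish_entry {
    time_t timestamp;   /* nonzero while the ISH is handshaking a new client */
    int    num_clts;    /* remote clients currently owned by this ISH */
    int    max_clts;    /* ISL stops dispatching to this ISH at this count */
};

/* CR062310: both fields are updated only when the network layer reports an
 * accepted or closed connection. */
void wsh_accepted(struct ish_entry *ish)
{
    ish->timestamp = 0;     /* handshake done: ISL may dispatch again */
    ish->num_clts++;
}

void wsh_close(struct ish_entry *ish)
{
    ish->num_clts--;        /* only runs once the ISH *sees* the close */
}

/* The regression: if the client dies during tpinit(), _wsh_rmcntxt() no
 * longer touches num_clts, so the slot stays "occupied" until the network
 * event arrives -- which, before CR189860 was fixed, could take minutes. */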
18009|FML filter specifications do not work correctly in Tuxedo EventBroker|FML|6.5(prop'd from WLE5.0)|424|rahul|Qiongyu Hu|IB|6.5/458|closed|186240/187756/053975|
There are two problems regarding Fboolev() operations: the unary operation and
the relational operation. In the GA version, Fboolev() handles the unary
operator incorrectly: if D_SHORT is not present in an FML buffer, !D_SHORT
evaluates to 0, which is wrong (an absent field should make it 1). The
relational operation is incorrect in the GA version too: if D_SHORT is not
present in an FML buffer, (D_SHORT < 1) evaluates to 1 (true), while it should
be 0 (false).

Several CRs affect these two behaviors:
                  Unary    Relational
GA                Wrong    Wrong
RP285 (CR053975)  Correct  Wrong
RP424 (CR018009)  Wrong    Correct     Fixed relational, broke CR053975
RP458 (CR186240)  Correct  Correct     Fixed CR018009's regression
RP460 (CR187756)  Correct  Compatible  Provides the prior RP424 behavior for
                                       compatibility, controlled by an
                                       environment variable (EV)
|11/12/2004
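As a quick reference for the intended semantics, here is a small FML example
built only from the description above. It assumes D_SHORT is a short field
defined in an application field table reachable through FLDTBLDIR/FIELDTBLS;
error checking is omitted, and this is not the EventBroker filter code itself.

#include <stdio.h>
#include <stdlib.h>
#include "fml.h"

int main(void)
{
    FBFR *fbfr  = Falloc(16, 1024);        /* empty buffer: D_SHORT absent */
    char *unary = Fboolco("!D_SHORT");
    char *rel   = Fboolco("D_SHORT < 1");

    /* Correct behavior (GA got both wrong, see the table above): */
    printf("!D_SHORT    -> %d (should be 1)\n", Fboolev(fbfr, unary));
    printf("D_SHORT < 1 -> %d (should be 0)\n", Fboolev(fbfr, rel));

    free(unary);
    free(rel);
    Ffree(fbfr);
    return 0;
}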
176747|TUX8.0: numerous T_SERVER requests locking the BB for too long a time|TMIB|8.0|270|Mahesh Jagannathan|Leihu Zhang|IB|8.0/304|closed|188800/193630|
The original CR176747 asks for better performance when accessing server
information through TMIB, especially when thousands of servers are deployed in
an application. The logic is to read the SERVERS section of tuxconfig into a
memory cache and use the cached data to answer TMIB calls. This avoids reading
the tuxconfig file over and over.

The problem with that fix is that information about system servers, such as
BBL, DBBL, BRIDGE and TMS, is not stored in the SERVERS section of tuxconfig.
So with the fix for CR176747, information about system servers can no longer
be retrieved through TMIB calls.

To solve the problem without giving up the performance gain, the function
_tmgetsvrparmsfrombb() was introduced to fetch information for system servers.
Its functionality is like that of _tmgetsvrparms(), but it collects the
information from the BB instead of reading tuxconfig, to avoid disk I/O while
the BB is locked.
|11/12/2004
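The split can be pictured as below. Only the names _tmgetsvrparms() and
_tmgetsvrparmsfrombb() come from the CR text; the struct, the stub bodies, the
signatures and the is_system_server() helper are invented for illustration and
do not match the real TMIB source.

#include <string.h>

struct svrparms { int srvgrp; int srvid; char servername[128]; /* ... */ };

/* Stub: lookup in the cached SERVERS section (application servers only). */
static int _tmgetsvrparms(int srvgrp, int srvid, struct svrparms *out)
{ (void)srvgrp; (void)srvid; memset(out, 0, sizeof *out); return 0; }

/* Stub: same information for BBL/DBBL/BRIDGE/TMS, read from the Bulletin
 * Board so no disk I/O happens while the BB lock is held. */
static int _tmgetsvrparmsfrombb(int srvgrp, int srvid, struct svrparms *out)
{ (void)srvgrp; (void)srvid; memset(out, 0, sizeof *out); return 0; }

static int is_system_server(const char *name)
{
    return strcmp(name, "BBL") == 0 || strcmp(name, "DBBL") == 0 ||
           strcmp(name, "BRIDGE") == 0 || strncmp(name, "TMS", 3) == 0;
}

int t_server_get(const char *name, int srvgrp, int srvid, struct svrparms *out)
{
    if (is_system_server(name))
        return _tmgetsvrparmsfrombb(srvgrp, srvid, out); /* not in tuxconfig */
    return _tmgetsvrparms(srvgrp, srvid, out);           /* served from cache */
}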
87119|TUX6.4: tpunsubscribe() deletes wrong records after migration|tuxedo|6.5(prop'd from 6.4)|383|Lung-Yung Chu|Xiantao Meng|IB|6.5/464|closed|193183|
The fix introduced the TA_SUBSCRIPTION_HANDLE field in the
tmsysevt.dat/tmusrevt.dat files. This field is used to ensure that the
subscriber and the TMSYSEVT server see the same handle value after TMSYSEVT is
rebooted.

Unfortunately, other related code assumes that handle values are always
generated sequentially from 0. So when it finds that some handle, say 6, has
been used before 5, tpsubscribe() fails and bad effects follow:

1. TMSYSEVT/TMUSREVT may core dump:
When tpsubscribe() fails, TMSYSEVT releases the related information, but it
does not reset the pointer to NULL, so the pointer refers to an invalid
address, which leads to a core dump during the TMSYSEVT loop scan.

2. tpsubscribe() fails:
Because subscription handles are stored in the control file, when TMSYSEVT
reboots it restores the subscription handles to memory from the saved
TA_SUBSCRIPTION_HANDLE values. But because some subscription handles have been
removed by tpunsubscribe(), the restored handle numbers are not contiguous.

When a new tpsubscribe() comes in, TMSYSEVT uses the number of used
subscription entries plus 1 as the handle of the newly allocated entry.
However, it has no information about which handles are already in use (that
is, "number of used entries + 1" may already be taken), so it may allocate an
entry with the same handle as an existing one, and tpsubscribe() fails.

CR193183 solved this regression. The solution is to be aware that a
higher-numbered handle may already be in use, and simply to skip such a handle
in that case.
|11/12/2004
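A minimal sketch of that allocation rule, under the assumption of a flat
in-memory subscription table (the real TMSYSEVT data structure is not shown in
the CR):

#include <stddef.h>

struct sub_entry { long handle; int in_use; };

static int handle_in_use(const struct sub_entry *tab, size_t n, long h)
{
    for (size_t i = 0; i < n; i++)
        if (tab[i].in_use && tab[i].handle == h)
            return 1;
    return 0;
}

/* Start from (used entries + 1), as before, but keep incrementing past any
 * handle that was restored from TA_SUBSCRIPTION_HANDLE and is still in use. */
long alloc_subscription_handle(const struct sub_entry *tab, size_t n)
{
    size_t used = 0;
    for (size_t i = 0; i < n; i++)
        if (tab[i].in_use)
            used++;

    long h = (long)used + 1;
    while (handle_in_use(tab, n, h))
        h++;
    return h;
}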
121447|TUX8.0/Domain: memory leak in transactional scenario|domain/transaction|8.0|223|Liang Luo|Hongmou Su|IB|8.0/308|closed|195786|
The idea behind the fix to CR121447 is not of interest to this document. While
Hongmou was working on CR195786, he identified a bug in CR121447's fix, in
function gw_nw_tms_event() (file domain/gwt/libgwt/gwnw_cmt.c):

<<< original
    GWDBG(50, ("< gw_nw_tms_event(530), returns 0"));
    return(0);

>>> CR121447's fix:
    GWDBG(50, ("< gw_nw_tms_event(530), returns 0"));
err_rtn:
    if (alldata != NULL && (alldata->ad_snd_inuse == GW_NULLIDX ||
        alldata->ad_snd_inuse == act_idx)) {
        gw_nw_fdResumeSend(_TCARG, alldata);
    }
    return(0);
}

<<< The right way should be:
    GWDBG(50, ("< gw_nw_tms_event(530), returns 0"));

err_rtn:
    if (alldata != NULL && (alldata->ad_snd_inuse == GW_NULLIDX ||
        alldata->ad_snd_inuse == act_idx)) {
        gw_nw_fdResumeSend(_TCARG, alldata);
    }
    return (rc);
}

So the point of the regression is that when control reaches 'err_rtn', the
function should return rc instead of 0; returning 0 discards whatever status
rc holds on the error path.
|11/12/2004