
changelog

A program package for generating rectilinear Steiner trees
because no basis is present. We avoid this by only deleting the slack rows when we know we'll be adding new constraints. When running with CPLEX, we still delete the slack rows as soon as we can.

* Re-enabled reduced_cost_var_fixing(), and deleted the comments that indicate it can't be used. It wasn't working before because it was broken, not because the technique is invalid.

* Updated new_lower_bound() and new_upper_bound() to work properly when the initial lower bound is set to -DBL_MAX instead of 0. This is to properly handle negative edge weights.

* enumerate_all_subtours() and find_small_subtours() now take the bbinfo argument, instead of just the current bbnode.

* do_separations() now applies the sec_flow_separator() only to components for which no violations have been found.

* Modified reduced_cost_var_fixing() to drive off of the node's "zlb" values. Modified fix_variables() to not print one line per variable fixed, but rather print a single summary line at the end -- if any fixing was done. When fixing vars, we now set the nodep -> fixed and nodep -> value masks also. If we don't do this, then fixing gets forgotten across suspend/resume of the node.

* Now returning VFIX_FIXED_FRACTIONAL when reduced_cost_var_fixing() / fix_variables() decides it can fix a variable that currently has a fractional value. In this case we need to go back and re-solve the LP.

* Modified integer_feasible_solution() to accept as valid solutions that use pruned edges. It turns out that the heuristic upper bound routine can produce such trees. We should consider ANY tree valid, even if it contains edges known to be suboptimal. Also added an early bailout if the total cardinality of all edges in the solution is bogus.

* Added new utility routine "append_node_to_tree()" to bbsubs.c. The bbheap_free() routine now frees any unfinished nodes left in the heap. Sprouted new destroy_bbinfo() and destroy_bbnode() routines to shut down and free up the whole enchilada.

* Added new routines set_floating_point_double_precision(), save_floating_point_precision(), and restore_floating_point_precision() as a partial remedy to the unpredictable precision problem when optimizing floating point values in registers on the Intel x86 (and similar) architectures. Using these routines we force the "PC" (precision control) field of the floating point control word so that most floating point results are automatically rounded to 53 bits of precision in the mantissa. (An illustrative sketch of this control-word manipulation appears below.) Unfortunately, not all floating point instructions on the processor obey the PC field setting (e.g., transcendental functions still always yield a 64 bit result). Also added checks to the configure script to automatically detect when this fix is both needed and works. If this fix is needed but does NOT work (e.g., x86 but not Linux) then we use "gcc -ffloat-store" as an alternate workaround (further changes to configure.in, aclocal.m4 and Makefile). The -ffloat-store method isn't a great alternative, however, because (a) it reduces floating point efficiency substantially, and (b) various versions of gcc have been known to have bugs wherein -ffloat-store is occasionally ignored.

* Moved the definition (i.e., storage allocation) of command line switch variables from main routines (e.g., bb.c/bbmain.c) to the file that really REFERENCES them. This improves modularity by making it easier to link these files into other programs without having to define a lot of global variables to resolve link errors.
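The following is a minimal sketch of the precision-control change described in the set_floating_point_double_precision() entry above, assuming a glibc/x86 environment. The routine names mirror the changelog, but the void signatures, the static saved control word, and the use of <fpu_control.h> are illustrative assumptions rather than the actual code in this package.

#include <fpu_control.h>	/* glibc on x86; assumed available */

/* Saved copy of the x87 control word (illustrative storage choice). */
static fpu_control_t saved_cw;

void save_floating_point_precision (void)
{
	_FPU_GETCW (saved_cw);
}

void set_floating_point_double_precision (void)
{
	fpu_control_t cw;

	_FPU_GETCW (cw);
	/* Clear the PC bits, then select 53-bit (double) precision. */
	cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
	_FPU_SETCW (cw);
}

void restore_floating_point_precision (void)
{
	_FPU_SETCW (saved_cw);
}

As the entry notes, this only helps for instructions that honor the PC field, which is why the configure-time check and the -ffloat-store fallback are still needed.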
* Added two new routines "bmst" and "bmst_terms" (to bmst.c) that return the actual edges of the bottleneck MST -- instead of only the MST length.

* Removed the "struct mstadj" from bsd.h. It is now defined locally within bsd.c. Also removed the "mst_edges" and "mark" members of "struct bsd". They were not needed to answer BSD queries, and are now purely local variables in compute_bsd. Renamed the "pnum" field of "struct mstadj" to be "vnum" so that this name no longer matches the "pnum" field in "struct point".

* Actually define and call the "shutdown_bsd" routine. Previously, this routine was declared in bsd.h, but never defined and never called. This fixes the memory leak associated with the BSD info.

* Split off the expand_constraint routine into a new file expand.c. This file now encapsulates all translation of logical constraints into physical constraint rows.

* Modified the CPLEX version of build_initial_formulation() (in constrnt.c) to scale the objective coefficients by a suitable power of two and save this in the new lpmem -> obj_scale field so that we can unscale all CPLEX results. This is to work around the unscaled infeasibility errors we get from CPLEX when the objective contains coefficients with large magnitudes. (An illustrative sketch of this scaling appears below.)

* Significant changes to solve_LP_over_constraint_pool() (in constrnt.c): Now that each node stores its own LP solution (Z and X values), add an early bailout if the constraint pool has not changed since the last time we called solve-over-pool. When solving an LP, now receive the "x" and "dj" (solution vars and reduced costs) into local buffers rather than directly into global branch-and-bound buffers -- this permits proper update of the node's "solution history", which includes the "z", "x", "zlb", and "bheur" fields of the current node. (Now that we have the "zlb" field, the reduced costs aren't needed anywhere else in the code.) Don't permit the pool iteration to be incremented to -1 so that we can use that value to force LP re-solve when branching, fixing variables, etc. Added experimental code to delete slack rows after iterations of solve-over-pool that have significantly increased the lower bound -- this is useful for extremely large problems that would run out of memory otherwise. Added a call to the new prune_pending_rows() routine, which limits the number of non-zeros added during a single iteration of solve-over-pool -- it keeps only the sparsest rows that stay under the limit.

* Changes to solve_single_LP() (both the lp_solve and CPLEX versions): New calling convention returns the "x" and "dj" arrays directly to the caller. No longer update the "bbip -> z" field, which is now obsolete -- each node is now the definitive repository for its current objective value and solution data. (constrnt.c)

* Modified the CPLEX version of solve_single_LP() to unscale the objective value and reduced costs by 2**(lpmem -> obj_scale). This is to work around the unscaled infeasibility errors we get from CPLEX when the objective contains coefficients with large magnitudes.

* Modified reload_cplex_problem to only call _MYCPX_chgbds() if there are any bounds to be changed. Now calling _MYCPX_copybase() instead of the obsolete _MYCPX_loadbase(). (constrnt.c)

* Modified add_constraints() to call prune_pending_rows() before calling add_pending_rows_to_LP(). This is useful for very large problems in that it limits the number of non-zeros that enter the problem on any one LP/separate iteration -- only the sparsest of the rows actually enter the LP (all rows generated do enter the pool, however). (constrnt.c)
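Below is a small sketch of the power-of-two objective scaling mentioned in the build_initial_formulation() and solve_single_LP() entries above. Only the idea of scaling the coefficients by an exact power of two and unscaling results by 2**obj_scale comes from the changelog; the helper names and the particular choice of exponent are assumptions.

#include <math.h>

/* Hypothetical helper: pick k so that dividing by 2**k brings the
   largest |coefficient| into [0.5, 1).  Powers of two are exact in
   binary floating point, so the scaling itself introduces no error. */
int choose_obj_scale (const double *obj, int n)
{
	double	maxmag = 0.0;
	int	i, k = 0;

	for (i = 0; i < n; i++) {
		if (fabs (obj [i]) > maxmag)
			maxmag = fabs (obj [i]);
	}
	if (maxmag > 0.0)
		frexp (maxmag, &k);	/* maxmag = f * 2**k, 0.5 <= f < 1 */
	return k;
}

/* Scale the objective down before loading it into the LP solver... */
void scale_objective (double *obj, int n, int k)
{
	int	i;

	for (i = 0; i < n; i++)
		obj [i] = ldexp (obj [i], -k);	/* exact divide by 2**k */
}

/* ...and multiply the objective value and reduced costs back up
   by 2**k after solving. */
double unscale_result (double value, int k)
{
	return ldexp (value, k);
}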
* Added the new prune_pending_rows() routine. It starts by counting the number of non-zeros in the pending constraints. The routine bails out unless this count exceeds some threshold (currently 2 million non-zeros). If the caller approves, slack rows are then deleted from the LP. The pending rows are then sorted from most-sparse to most-dense. The densest rows that exceed the threshold are then forced to be non-pending once again. This helps keep us from running out of memory on some very large problems. (See the illustrative sketch below.)

* Significant modifications to garbage_collect_pool() (in constrnt.c): Now counting the non-zeros that are currently binding for ANY node (active or inactive) -- this is the total number of pool non-zeros that are currently "useful". Now using the new equate "GRACE_TIME" to define the number of "grace" iterations a constraint is given to become binding. Now calling sort_gc_candidates() as late as possible so that we can sometimes avoid the extra work of sorting all these rows. The target size of the pool (in non-zeros) is now 16 times the number of "useful" non-zeros OR ELSE "Target_Pool_Non_Zeros" (if the user has specified -z) -- this seems to give more reasonable (i.e., larger) pool sizes, and also allows the pool to grow more as new nodes are processed (instead of squeezing ever more nodes' constraints into a rigid global space limit). Now bailing out early if the impending pool size does not exceed the target pool size.

* Disabled the test in the CPLEX version of delete_slack_rows_from_LP() that avoids deleting slack unless at least 1/6 of the rows or 10% of the non-zeros would be deleted. It really seems to be better overall if we always delete slack.

* The restore_node_basis() routine (constrnt.c) may now be called on nodes that have no basis stored (e.g., when nodep -> cstat and rstat are NULL). This now happens when we "activate" the root node the very first time. In the CPLEX case, we now call _MYCPX_copybase() to do this instead of the obsolete _MYCPX_loadbase().

* Added new files cra.c and cra.h to compute Closest Rational Approximations. Used by the new branch variable ordering heuristic.

* Split off the add_cutset_to_list() and is_subset() routines (originally in cutset.c) into the new cutsubs.c file. Changed the calling convention of add_cutset_to_list() so that it determines the set of edges that span the cut internally rather than having the caller do this. Got rid of the identify_cutset() routine, since that was yet another copy of the code to determine the edges spanning the cut. The add_cutset_to_list() routine now encapsulates all that we do when a cutset violation is detected.

* Modified the calling convention of simple_cuts() and enumerate_cuts() (in cutset.c) to take the LP solution instead of a temporary cutset buffer.

* Removed the file-scope "debug_flag" from cutset.c -- definitely obsolete. This was used long ago to enable additional messages without recompiling by setting debug_flag = 1 with a debugger.

* Modified the "dumpfst" program (in dumpfst.c) to implement the new -d and -h options. The -d flag gives a statistical summary of the FSTs. The -h flag gives a histogram of the number of FSTs by cardinality. If neither -d nor -h is given, then "dumpfst" functions as in the previous release. Now using the common global sort_ints routine, and also freeing up the phase 1 data when we are done.
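The sparsest-rows-first logic described in the prune_pending_rows() entry above might look roughly like the sketch below. The struct, field, and function names are made up for illustration, the slack-row deletion step is omitted, and only the roughly 2 million non-zero threshold is taken from the changelog.

#include <stdlib.h>

#define NZ_THRESHOLD	2000000L	/* roughly 2 million non-zeros */

/* Illustrative stand-in for a pending constraint row. */
struct pend_row {
	int	nnz;		/* non-zeros in this row */
	int	pending;	/* 1 = slated to enter the LP this iteration */
};

/* qsort comparator: most-sparse rows first. */
static int by_nnz (const void *a, const void *b)
{
	const struct pend_row *	ra = a;
	const struct pend_row *	rb = b;

	return ra -> nnz - rb -> nnz;
}

void prune_pending_rows_sketch (struct pend_row *rows, int n)
{
	long	total = 0, used = 0;
	int	i;

	for (i = 0; i < n; i++)
		total += rows [i].nnz;
	if (total <= NZ_THRESHOLD)
		return;			/* small enough -- let them all in */

	qsort (rows, n, sizeof (rows [0]), by_nnz);
	for (i = 0; i < n; i++) {
		used += rows [i].nnz;
		if (used > NZ_THRESHOLD)
			rows [i].pending = 0;	/* too dense: stays in the pool */
	}
}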
* Numeric stability improvements to efst.c (round 1): Use a dynamically computed epsilon value for floating point comparisons. (See the illustrative sketch below.)

* Numeric stability improvements to efst.c (round 2): Translated the initial point set to a new origin computed to be "central" to the point set and to have coordinates with relatively few significant bits. This maximizes the numeric resolution with respect to the given data.

* Numeric stability improvements to efst.c (round 3): Perform all computations for an eq-point with respect to an origin located at one of its terminals. This again maximizes the precision available with respect to the given data.

* Numeric stability improvements to efst.c (round 4): Encapsulated all movement of Steiner arc endpoints into new routines (test_and_save_LP and test_and_save_RP) that are now much more careful to make sure that the arc is not shortened any more than can be justified numerically.

* Numeric stability improvements to efst.c (round 5): Introduced the use of GMP (when available and requested via the -m level switch) to make sure that the final locations of all *accepted* eq-points, and the lengths of all generated EFSTs, are correct to within 1/2 ULP. (These computations are encapsulated within the new files egmp.c and egmp.h.) When GMP is not used, errors in the location of eq-points can accumulate as the trees of terminals comprising them get larger. These accumulated errors also affect the length of the Simpson line, and therefore also the final EFST lengths.

* Copy the description string to the heap in main() of efst.c and rfst.c so that it can get freed with the rest of the cinfo. Also, disable freeing of the point set that gets freed with the cinfo -- we don't want to free it twice. Now freeing up the cinfo and other data structures that were previously being leaked.

* Added some memsets at key places (in efst.c) so that structures (e.g., struct point) that have "holes" in them get completely initialized. Doing block copies of uninitialized memory causes warning messages when we use "purify" to find memory leaks.

* Modified efst.c so that the -e flag now specifies the INITIAL allocation of equilateral points. When this is not enough, the arrays are dynamically reallocated and updated.

* Introduced new type "eterm_t" (in efst.h) used to store the indices of terminals that comprise each eq-point. Making this be
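For the efst.c round 1 change above, a dynamically computed epsilon for floating point comparisons could be sketched as below. The 32-ULP width and the helper names are assumptions; the actual tolerance computation in efst.c may differ.

#include <float.h>
#include <math.h>

/* Comparison tolerance scaled to the magnitude of the operands,
   instead of a fixed absolute epsilon.  Illustrative only. */
static double dynamic_eps (double a, double b)
{
	double	mag;

	mag = (fabs (a) > fabs (b)) ? fabs (a) : fabs (b);
	if (mag < 1.0)
		mag = 1.0;
	return 32.0 * DBL_EPSILON * mag;	/* a few ULPs at this scale */
}

static int nearly_equal (double a, double b)
{
	return fabs (a - b) <= dynamic_eps (a, b);
}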
