constrnt.c
Package for generating rectilinear Steiner trees
Language: C
Page 1 of 5
/*
 * Prune back the pending rows so that only the smallest of these rows
 * are added to the LP tableaux the first time around.  (The larger rows
 * that are "pruned" stay in the pool, however.)  We hope that by adding
 * only the small (i.e. sparse) rows that the following happens:
 *
 *	1.	The resulting LP solves more quickly.
 *	2.	The solution perturbs sufficiently so that
 *		most of the dense rows we avoided adding are
 *		now slack.
 *
 * In other words, why bog down the LP solver with lots of dense rows
 * that will become slack with the slightest perturbation of the
 * LP solution?  If we hold off, we may NEVER have to add them!
 *
 * This pruning process actually seems to make us run SLOWER in practice,
 * probably because it usually results in at least one more scan over
 * the pool and one more LP call before "solve-LP-over-pool" terminates.
 * Therefore, we only do this pruning in cases where the pending rows have
 * an extra large number of non-zero coefficients.  When this happens we
 * trim off the largest rows until the threshold is met.
 */

	static
	void
prune_pending_rows (

struct bbinfo *		bbip,		/* IN - branch-and-bound info */
bool			can_del_slack	/* IN - OK to delete slack rows? */
)
{
int		i;
int		k;
int		n;
int		h;
int		tmp;
int		key;
struct cpool *	pool;
int *		p1;
int *		p2;
int *		p3;
int *		p4;
int *		endp;
int *		parray;
int		total;
struct rcon *	rcp;

#define THRESHOLD (2 * 1000 * 1000)

	pool = bbip -> cpool;

	/* Get address and count of pending LP rows... */
	n = pool -> npend;
	parray = &(pool -> lprows [pool -> nlprows]);
	endp = &parray [n];

	total = 0;
	for (i = 0; i < n; i++) {
		tmp = parray [i];
		total += pool -> rows [tmp].len;
		if (total > THRESHOLD) break;
	}
	if (total <= THRESHOLD) {
		/* Don't bother pruning anything back... */
		return;
	}

	if (can_del_slack) {
		/* To help keep memory from growing needlessly,	*/
		/* get rid of any slack rows first...		*/
		delete_slack_rows_from_LP (bbip);
		parray = &(pool -> lprows [pool -> nlprows]);
		endp = &parray [n];
	}

	/* Sort rows in ascending order by number of non-zero coeffs... */
	for (h = 1; h <= n; h = 3*h+1) {
	}
	do {
		h = h / 3;
		p4 = &parray [h];
		p1 = p4;
		while (p1 < endp) {
			tmp = *p1;
			key = pool -> rows [tmp].len;
			p2 = p1;
			while (TRUE) {
				p3 = (p2 - h);
				if (pool -> rows [*p3].len <= key) break;
				*p2 = *p3;
				p2 = p3;
				if (p2 < p4) break;
			}
			*p2 = tmp;
			++p1;
		}
	} while (h > 1);

	/* Find the first I constraints having fewer than the	*/
	/* threshold number of coefficients.			*/
	total = 0;
	for (i = 0; i < n; i++) {
		tmp = parray [i];
		total += pool -> rows [tmp].len;
		if (total > THRESHOLD) break;
	}

	/* Make pending rows i through n-1 not be pending any more... */
	for (k = i; k < n; k++) {
		tmp = parray [k];
		rcp = &(pool -> rows [tmp]);
		if (rcp -> lprow NE -2) {
			fatal ("prune_pending_rows: Bug 1.");
		}
		rcp -> lprow = -1;
	}

	/* Reduce the number of pending rows to be only the small ones	*/
	/* we are going to keep...					*/
	pool -> npend = i;

#undef THRESHOLD
}
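
/*
 * Illustrative sketch (not from the original source): the sort above is
 * a Shellsort over row indices, keyed by row length, using the gap
 * sequence h = 1, 4, 13, 40, ... (grown past n, then divided by 3 before
 * each pass).  The same pattern is shown below on a plain index array
 * with a separate key array; the name shellsort_by_key is hypothetical
 * and the routine is not called anywhere in this file.
 */

	static
	void
shellsort_by_key (

int *		idx,		/* IN/OUT - indices to sort */
const int *	key,		/* IN - key [idx [i]] is sort key of idx [i] */
int		n		/* IN - number of indices */
)
{
int		i;
int		j;
int		h;
int		tmp;

	for (h = 1; h <= n; h = 3*h+1) {
	}
	do {
		h = h / 3;
		for (i = h; i < n; i++) {
			tmp = idx [i];
			for (j = i; j >= h; j -= h) {
				if (key [idx [j - h]] <= key [tmp]) break;
				idx [j] = idx [j - h];
			}
			idx [j] = tmp;
		}
	} while (h > 1);
}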

/*
 * This routine determines whether or not the given physical constraint
 * coefficients are violated by the given solution X.
 */

	bool
is_violation (

struct rcoef *		cp,		/* IN - row of coefficients */
double *		x		/* IN - LP solution */
)
{
int			var;
double			sum;

	sum = 0.0;
	for (;;) {
		var = cp -> var;
		if (var < RC_VAR_BASE) break;
		sum += (cp -> val * x [var - RC_VAR_BASE]);
		++cp;
	}

	switch (var) {
	case RC_OP_LE:
		return (sum > cp -> val + FUZZ);

	case RC_OP_EQ:
		sum -= cp -> val;
		return ((sum < -FUZZ) OR (FUZZ < sum));

	case RC_OP_GE:
		return (sum + FUZZ < cp -> val);

	default:
		fatal ("is_violation: Bug 1.");
		break;
	}

	return (FALSE);
}

/*
 * This routine computes the amount of slack (if any) for the given
 * coefficient row with respect to the given solution X.
 */

	static
	double
compute_slack_value (

struct rcoef *		cp,		/* IN - row of coefficients */
double *		x		/* IN - LP solution */
)
{
int			var;
double			sum;

	sum = 0.0;
	for (;;) {
		var = cp -> var;
		if (var < RC_VAR_BASE) break;
		sum += (cp -> val * x [var - RC_VAR_BASE]);
		++cp;
	}

	switch (var) {
	case RC_OP_LE:
		return (cp -> val - sum);

	case RC_OP_EQ:
		sum -= cp -> val;
		if (sum > 0.0) {
			/* No such thing as slack -- only violation! */
			sum = -sum;
		}
		return (sum);

	case RC_OP_GE:
		return (sum - cp -> val);

	default:
		fatal ("compute_slack_value: Bug 1.");
		break;
	}

	return (0.0);
}
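
/*
 * Illustrative example (not from the original source): the coefficient
 * row format walked by is_violation() and compute_slack_value() above.
 * A row is a sequence of (var, val) entries with var >= RC_VAR_BASE,
 * where an entry (RC_VAR_BASE + j, c) contributes c * x [j] to the sum;
 * the row is terminated by an operator entry (RC_OP_LE, RC_OP_EQ or
 * RC_OP_GE) whose val field holds the right-hand side.  For example,
 * the constraint 2*x[0] + x[3] <= 1.5 would be stored roughly as shown
 * below.  The initializer is compiled out because the exact field types
 * and layout of struct rcoef are assumptions here.
 */

#if 0
	static struct rcoef	example_row [] = {
		{ .var = RC_VAR_BASE + 0,	.val = 2.0 },	/* 2.0 * x [0]	*/
		{ .var = RC_VAR_BASE + 3,	.val = 1.0 },	/* + 1.0 * x [3]	*/
		{ .var = RC_OP_LE,		.val = 1.5 },	/* <= 1.5	*/
	};
#endif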

/*
 * This routine performs a "garbage collection" on the constraint pool, and
 * is done any time we have too many coefficients to fit into the allotted
 * pool space.
 *
 * Constraints are considered for replacement based upon the product of
 * their size and the number of iterations since they were effective
 * (i.e. binding).  We *never* remove "initial" constraints, since they
 * would never be found by the separation algorithms.  Neither do we
 * remove constraints that have been binding sometime during the most
 * recent few iterations.
 */

	static
	void
garbage_collect_pool (

struct cpool *		pool,		/* IN - constraint pool */
int			ncoeff,		/* IN - approx num coeffs needed */
int			nrows		/* IN - approx num rows needed */
)
{
int			i;
int			j;
int			k;
int			maxsize;
int			minrow;
int			count;
int			time;
int			nz;
int			target;
int			impending_size;
int			min_recover;
struct rcon *		rcp;
int *			cnum;
int32u *		cost;
bool *			delflags;
int *			renum;
int *			ihookp;
struct rblk *		blkp;
struct rcoef *		p1;
struct rcoef *		p2;
struct rcoef *		p3;
struct rblk *		tmp1;
struct rblk *		tmp2;

	tracef (" %% Entering garbage_collect_pool\n");

	print_pool_memory_usage (pool);

	if (pool -> npend > 0) {
		fatal ("garbage_collect_pool: Bug 0.");
	}

	maxsize = pool -> nrows - pool -> initrows;

	cnum	= NEWA (maxsize, int);
	cost	= NEWA (maxsize, int32u);

	/* Count non-zeros in all constraints that are binding	*/
	/* for ANY node.  This is the total number of pool	*/
	/* non-zeros that are currently "useful".		*/
	nz = 0;
	for (i = 0; i < pool -> nrows; i++) {
		rcp = &(pool -> rows [i]);
		if ((rcp -> lprow NE -1) OR
		    (rcp -> refc > 0)) {
			/* This row is in (or on its way to) the LP,	*/
			/* or is binding for some suspended node.	*/
			nz += (rcp -> len + 1);
		}
	}

	count = 0;
	for (i = pool -> initrows; i < pool -> nrows; i++) {
		rcp = &(pool -> rows [i]);
		if (rcp -> lprow NE -1) {
			/* NEVER get rid of any row currently in (or on	*/
			/* its way to) the LP tableaux!			*/
			continue;
		}
		if (rcp -> refc > 0) {
			/* NEVER get rid of a row that is binding for	*/
			/* some currently suspended node!		*/
			continue;
		}
		if ((rcp -> flags & RCON_FLAG_DISCARD) NE 0) {
			/* Always discard these! */
			cnum [count]	= i;
			cost [count]	= INT_MAX;
			++count;
			continue;
		}
		time = pool -> iter - rcp -> biter;
#if 1
#define	GRACE_TIME	10
#else
#define GRACE_TIME	50
#endif
		if (time < GRACE_TIME) {
			/* Give this constraint more time in the pool.	*/
			continue;
		}
#undef GRACE_TIME

		/* This row is a candidate for being deleted! */
		cnum [count]	= i;
		cost [count]	= (rcp -> len + 1) * time;
		++count;
	}

	if (count <= 0) {
		free ((char *) cost);
		free ((char *) cnum);
		return;
	}

	/* Determine how many non-zeros to chomp from the pool.	*/
#if 0
	/* What we want is for the pool to remain proportionate	*/
	/* in size to the LP tableaux, so our target is a	*/
	/* multiple of the high-water mark of non-zeros in the	*/
	/* LP tableaux.						*/
	target = 4 * pool -> hwmnz;
#else
	/* What we want is for the pool to remain proportionate	*/
	/* in size to the number of non-zeros currently being	*/
	/* used by any node.					*/
	target = 16 * nz;
#endif

	if (Target_Pool_Non_Zeros > 0) {
		target = Target_Pool_Non_Zeros;
	}

	impending_size = pool -> num_nz + ncoeff;
	if (impending_size <= target) {
		free ((char *) cost);
		free ((char *) cnum);
		return;
	}

	min_recover = 3 * ncoeff / 2;
	if ((impending_size > target) AND
	    (impending_size - target > min_recover)) {
		min_recover = impending_size - target;
	}

	/* Sort candidate rows by cost... */
	sort_gc_candidates (cnum, cost, count);

	/* Find most-costly rows to delete that will achieve	*/
	/* the target pool size.				*/
	minrow = pool -> nrows;
	nz = 0;
	i = count - 1;
	for (;;) {
		k = cnum [i];
		nz += pool -> rows [k].len;
		if (k < minrow) {
			minrow = k;
		}
		if (nz >= min_recover) break;
		if (i EQ 0) break;
		--i;
	}

	/* We are deleting this many non-zeros from the pool... */
	pool -> num_nz -= nz;

	/* We want to delete constraints numbered cnum [i..count-1]. */
	delflags = NEWA (pool -> nrows, bool);
	memset (delflags, 0, pool -> nrows);
	for (; i < count; i++) {
		delflags [cnum [i]] = TRUE;
	}

	/* Compute a map for renumbering the constraints that remain.	*/
	renum = NEWA (pool -> nrows, int);
	j = 0;
	for (i = 0; i < pool -> nrows; i++) {
		if (delflags [i]) {
			renum [i] = -1;
		}
		else {
			renum [i] = j++;
		}
	}

	/* Renumber the constraint numbers of the LP tableaux... */
	for (i = 0; i < pool -> nlprows; i++) {
		j = renum [pool -> lprows [i]];
		if (j < 0) {
			fatal ("garbage_collect_pool: Bug 1.");
		}
		pool -> lprows [i] = j;
	}

	/* Renumber all of the hash table linked lists.  Unthread all	*/
	/* entries that are being deleted.				*/
	for (i = 0; i < CPOOL_HASH_SIZE; i++) {
		ihookp = &(pool -> hash [i]);
		while ((j = *ihookp) >= 0) {
			rcp = &(pool -> rows [j]);
			k = renum [j];
			if (k < 0) {
				*ihookp = rcp -> next;
			}
			else {
				*ihookp = k;
				ihookp = &(rcp -> next);
			}
		}
	}

	/* Delete proper row headers... */
	j = minrow;
	for (i = minrow; i < pool -> nrows; i++) {
		if (delflags [i]) continue;
		pool -> rows [j] = pool -> rows [i];
		++j;
	}
	pool -> nrows = j;

	/* Temporarily reverse the order of the coefficient blocks... */
	blkp = reverse_rblks (pool -> blocks);
	pool -> blocks = blkp;

	/* Now compact the coefficient rows... */
	p1 = blkp -> base;
	p2 = blkp -> ptr + blkp -> nfree;
	for (i = 0; i < pool -> nrows; i++) {
		rcp = &(pool -> rows [i]);
		p3 = rcp -> coefs;
		j = rcp -> len + 1;
		if (p1 + j > p2) {
			/* Not enough room in current block -- on to next. */
			blkp -> ptr = p1;
			blkp -> nfree = p2 - p1;
			blkp = blkp -> next;
			p1 = blkp -> base;
			p2 = blkp -> ptr + blkp -> nfree;
		}
		if (p3 NE p1) {
			rcp -> coefs = p1;
			memcpy (p1, p3, j * sizeof (*p1));
		}
		p1 += j;
	}
	blkp -> ptr = p1;
	blkp -> nfree = p2 - p1;

	/* Free up any blocks that are now unused.  Note: this	*/
	/* code assumes the first rblk survives!		*/
	tmp1 = blkp -> next;
	blkp -> next = NULL;
	while (tmp1 NE NULL) {
		tmp2 = tmp1 -> next;
		tmp1 -> next = NULL;
		free ((char *) (tmp1 -> base));
		free ((char *) tmp1);
		tmp1 = tmp2;
	}

	/* Re-reverse list of coefficient blocks so that the one we are	*/
	/* allocating from is first.					*/
	pool -> blocks = reverse_rblks (pool -> blocks);

	free ((char *) renum);
	free ((char *) delflags);
	free ((char *) cost);
	free ((char *) cnum);

	print_pool_memory_usage (pool);

	tracef (" %% Leaving garbage_collect_pool\n");
}
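
/*
 * Worked example (not from the original source): with the retention
 * cost used above, cost = (len + 1) * (iter - biter), a row with 20
 * coefficients that was last binding 30 iterations ago costs
 * (20 + 1) * 30 = 630, while a 5-coefficient row slack for the same
 * 30 iterations costs only (5 + 1) * 30 = 180.  Sorting by this cost
 * therefore discards large, long-slack rows first, after rows younger
 * than GRACE_TIME iterations have been excluded from candidacy.
 */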

/*
 * This routine sorts the candidate rows in order by cost (of retaining
 * them).
 */

	static
	void
sort_gc_candidates (

int *		cnum,		/* IN - constraint numbers within pool */
int32u *	cost,		/* IN - constraint costs (mem*time products) */
int		n		/* IN - number of candidates */
)
{
int		i;
int		j;
int		h;
int		tmp_cnum;
int32u		tmp_cost;

	for (h = 1; h <= n; h = 3*h+1) {
	}
	do {
		h = h / 3;
		for (i = h; i < n; i++) {
			tmp_cnum = cnum [i];
			tmp_cost = cost [i];
			for (j = i; j >= h; j -= h) {
				if (cost [j - h] <= tmp_cost) break;
				cnum [j] = cnum [j - h];
				cost [j] = cost [j - h];
			}
			cnum [j] = tmp_cnum;
			cost [j] = tmp_cost;
		}
	} while (h > 1);
}

/*
 * This routine reverses a list of rblk structures.
 */

	static
	struct rblk *
reverse_rblks (

struct rblk *		p		/* IN - list of rblk structures */
)
{
struct rblk *		r;
struct rblk *		tmp;

	r = NULL;
	while (p NE NULL) {
		tmp = p -> next;
		p -> next = r;
		r = p;
		p = tmp;
	}

	return (r);
}

/*
 * This routine deletes all rows from the LP that are currently slack.
 * Note that these constraints remain in the pool.  This is purely an
 * efficiency hack designed to limit the number of rows that the LP
 * solver has to contend with at
