readme,v

Lin-Kernighan heuristic for the TSP and minimum weight perfect matching
head	1.11;
access;
symbols
	zero-five-zero:1.11
	zero-four-seventeen:1.9
	zero-four-ten:1.9
	zero-four-nine:1.8
	zero-four-eight:1.8
	zero-four-five:1.7
	zero-four-three:1.7
	zero-four-zero:1.4;
locks
	neto:1.11; strict;
comment	@# @;

1.11
date	2000.09.17.04.21.15;	author neto;	state Exp;
branches;
next	1.10;

1.10
date	99.01.14.19.21.55;	author neto;	state Exp;
branches;
next	1.9;

1.9
date	98.08.23.21.42.47;	author neto;	state Exp;
branches;
next	1.8;

1.8
date	98.08.14.20.42.03;	author neto;	state Exp;
branches;
next	1.7;

1.7
date	98.07.31.18.26.00;	author neto;	state Exp;
branches;
next	1.6;

1.6
date	98.07.31.18.01.54;	author neto;	state Exp;
branches;
next	1.5;

1.5
date	98.07.31.17.53.46;	author neto;	state Exp;
branches;
next	1.4;

1.4
date	98.07.16.23.11.51;	author neto;	state Exp;
branches;
next	1.3;

1.3
date	98.07.16.22.04.18;	author neto;	state Exp;
branches;
next	1.2;

1.2
date	98.07.16.21.36.23;	author neto;	state Exp;
branches;
next	1.1;

1.1
date	97.05.14.19.04.54;	author neto;	state Exp;
branches;
next	;

desc
@README file for the LK distribution.@

1.11
log
@Updated for 0.5.0
two bugfixes mentioned
Mention that I finished my PhD
Mention the DIMACS TSP challenge
@
text
@This is the README for lk-0.5.0

See BUGS and ChangeLog for a summary of recent changes, including two
bug fixes since the last public release (0.4.17).

The home site of this package is http://www.cs.utoronto.ca/~neto/research/lk

Enjoy,
David Neto, netod@@acm.org, 2000/9/17

Contents
========
 - Introduction
 - Making the programs
 - Why a new implementation?
 - Tools
 - Algorithms
 - Performance
 - Rough bits
 - Copyrights and Licenses
 - References

Introduction
============

Program LK implements the Lin-Kernighan heuristic for the symmetric
traveling salesman problem (TSP).

See the famous 1973 paper by Shen Lin and Brian Kernighan for the first
description and motivation of this algorithm.  (References appear below.)
A more modern source is the chapter by Johnson and McGeoch; it describes
the Lin-Kernighan algorithm in the context of other local search
algorithms for the TSP.

This program was written by David Neto for his doctoral work in the
Department of Computer Science at the University of Toronto.  It mostly
follows the design outline given by Johnson and McGeoch for the
implementation due to Johnson, Bentley, McGeoch and Rothberg (JBMR),
and is tuned for two-dimensional Euclidean instances.

I successfully defended my PhD on May 25, 2000.  The thesis title is
``Efficient Cluster Compensation for Lin-Kernighan Heuristics''.
Find out more by visiting
http://www.cs.utoronto.ca/~neto/research/thesis.html
The thesis is now available online in PostScript form.

This program introduces ``efficient cluster compensation'', an
algorithmic technique intended to make the Lin-Kernighan heuristic
more robust in the face of clustered data.

Mail suggestions and bug reports for LK to netod@@acm.org.  Please include
the version of `lk', which you can get by running `lk --version'.
If pertinent, include a pointer to sample inputs.

Programs jitter, shake, and tspgen are TSP instance generators; they are
the subject of active research.
Program unifd generates 2-d instances with uniform distribution over a square.

Making the programs
===================

See the file INSTALL for compilation instructions.  In a nutshell:

	./configure
	make
	(cd script; make myall)
	(cd src; ../script/lkdoitcmp)

You'll probably want to execute the ../script/lkdoitcmp script while
inside the src directory.  That creates two binaries:

	lk.no_d (baseline Lin-Kernighan without cluster compensation), and
	lk.deg (lk with cluster compensation)

You can tell which version is which by executing the binary with
--version.  The cluster compensation version will have a 'deg' at the
end of its version number, e.g.

	LK 0.4.20deg

Scripts for munging data and running experiments are in the script
directory, the data directory, and this directory.
Buyer beware.  :)

The expt directory contains scripts for script/expt.pl that run a
battery of experiments.  The expt directory also contains a horribly
convoluted makefile for processing the output logs of the experiments.

Once both binaries for lk are built, try this inside the expt directory:

    ../script/expt.pl expttest
    make tsp.lin105..i1..cf.plt
    make tsp.lin105..i1..probe.plt
    make tsp.lin105..i1..move.plt

    # Shows run by run comparison with and without cluster compensation.
    gnuplot tsp.lin105..i1..cf.plt

    # Show probe depths of lambda-changes (how long does the t list grow,
    #	and how often)
    gnuplot tsp.lin105..i1..probe.plt

    # Show move depths of improving lambda-changes (i.e. that we commit to)
    gnuplot tsp.lin105..i1..move.plt

If you want to try a more extensive set of experiments, try expttsp.

script/TSP.pm is a start at a Perl module that should mostly encapsulate
munging of TSPLIB files.  For example, see script/tsplib2rothberg.pl.

Why a new implementation?
=========================

1. "You never really know something until you teach it to a computer."
	-- Donald Knuth (paraphrase)

2. When I started writing this program, there were no freely available
implementations of the Lin-Kernighan heuristic.

3. I wanted to instrument the program so I could learn more about the
behaviour of the algorithm.  Writing it myself seemed a natural way
to know where and what to instrument.

4. I originally wanted to run the code in parallel.  Also, I chose the
performance-oriented implementation language C.  Both factors meant I
needed to architect the code in a particular way.  (Now I don't care
about parallelizing the code.)

5. I've written (most of) the code in a "literate style".  I've used the
CWEB suite of literate programming tools.  I'm convinced literate
programming is the way to go for experimental and expository
programming.

6. Free software is a good thing.  By "free" I mean free to
redistribute and modify.
(See http://www.debian.org/intro/free.)  I wanted to be able to
distribute the code under the GNU GPL or the GNU LGPL, so I needed to
write a clean-room implementation.

I do not resent other people for not releasing their code.  I have the
greatest respect for other researchers in this field (see the
references).  They have already given a great deal in describing their
own work.

Let me clarify this stance with two quotes:

"My opinion on licenses is that `he who writes the code gets to choose
the license, and nobody else gets to complain.'  Anybody complaining
about a copyright license is a whiner." -- Linus Torvalds.

"Implementation is the sincerest form of flattery." -- L. Peter Deutsch.

Note: I just found out about the Applegate, Bixby, Chvatal, Cook source
code release.  I'm eager to try out their code.
(See http://www.caam.rice.edu/~keck/concorde.html)
But before I do, I'm releasing my own code, to establish the
"cleanliness" of my code.

Tools
=====

If you want to try the programs in this package, you will need a C
compiler and make.  The only non-standard C the program requires is the
BSD resource usage functions (getrusage).

If you want to munge the output and run experiments automatically,
you will need Perl and GNU make (for expt/Makefile).  If you want
to plot the munged output of experiments, you will need gnuplot.

If you want to read the code, you will need CWEB and TeX to process
the .w files into .dvi files.

If you want to develop the code, you will need CWEB (and make and a C
compiler).  Edit the .w files, not the .c files!  I also strongly
suggest using RCS.  You'll probably also want GNU autoconf and GNU
automake.  Remaking dependencies can be avoided by first using
automake --include-deps in the project root directory.

Algorithms
==========

For the Lin-Kernighan heuristic for the TSP, I worked from Lin and
Kernighan's original paper and from the Johnson and McGeoch chapter.
(See src/jbmr.w)

Jon Bentley and Doug McIlroy developed the QuickSort variant coded as
dsort() in file src/dsort.w.
See their article "Engineering a Sort Function".  I used it here
because it is deterministic: I need repeatability for my experiments.
Apparently, the Solaris qsort function isn't always deterministic.
(See src/dsort.w)

Kd-trees for semi-dynamic point sets are Jon Bentley's creation.
I've implemented 2-d trees only.  3-d and beyond shouldn't be hard.
Of the queries, I've implemented only the nearest-neighbour queries,
not the fixed-radius searches.  (See src/kdtree.w)

The TSP code uses the oriented tour ADT as described by Fredman et al.
I've implemented the ADT in terms of arrays and two-level trees.
(See src/array.w, src/twolevel.w)

I've also implemented a crude version of Held-Karp lower bounds for
the TSP.  (See src/ascend.w)

Program LK can also be used for approximately solving the minimum
weight perfect matching problem.  The details are simpler than for
the TSP.  I don't know of anyone else applying Lin-Kernighan to
weighted perfect matching, although Bentley used 2-opt for this
problem.  (See his paper on fast geometric algorithms for the TSP.
He used a 2-opt matching algorithm as a component in his approximate
Christofides algorithm.)
(See src/match.w: it plays the role for matching that jbmr.w does for
the TSP.)

I invented cluster compensation, and I think it's kind of neat.
(See src/decluster.w)

Fast least common ancestors in binary trees is implemented in
decluster.w.  I use the algorithm described by Schieber.

The chaos game for iterated function systems played in ifs.w is
described in Michael Barnsley's _Fractals Everywhere_.

Performance
===========

Program lk is a high quality implementation of the Lin-Kernighan
heuristic.  However, I wouldn't call it "state of the art".  What do
I mean?

I consider the JBMR implementation to be the state of the art
implementation of the Lin-Kernighan heuristic for the TSP.  I use
it as the standard of comparison.
In the latter stages of my doctoral research another group, led by Bill
Cook, publicized their own extremely scalable implementation of the
Lin-Kernighan heuristic.

(As I write, there is a DIMACS implementation challenge for the TSP;
see http://www.research.att.com/~dsj/chtsp/)

My implementation is "high quality" because it has similar algorithmic
behaviour to the JBMR implementation.  For example, the average depth
of the searches is about 3 or 4 beyond the backtracking phase, i.e. to
about t_12 or t_14.  (If you know Lin-Kernighan, you'll know what this
means.)  It routinely gets tours that are within 2% of optimal (or
above the Held-Karp lower bound).  It can be run on million-city
uniform geometric instances in reasonable time and space.

However, my implementation is slower than the JBMR implementation.
In tests I ran a while back, my implementation was about twice as
slow as the JBMR implementation.  For example, in about 2 CPU hours it
can find near-optimal tours for million-city instances.  (One SGI
Challenge 150MHz MIPS R4400 processor; using about 250MB of RAM; a
million cities drawn randomly from a uniform distribution over the unit
square; distance between cities is the Euclidean distance; within 2% of
optimal, or 2.5% above the expected Held-Karp bound.)

Rough bits
==========

LK is not finished.  There are a number of things that can be improved.

This code is not yet suitable for inclusion in a library, as there are
a number of global structures that would have to be encapsulated.  It
will take some cleaning up and some rearchitecting before this
happens.  That's not a high priority for me right now.  I'd rather
finish my thesis.

There are a number of experimental coding techniques I tried for the
first time with this implementation.  Aside from scripts, all the
coding was done in CWEB.  That was just great from a developmental
point of view, as the code is almost entirely self-documenting.
However, I took great stylistic liberties in coding tricks and
techniques afforded by CWEB's section-oriented programming model (as
opposed to C's function-oriented programming model).  For instance,
module jbmr.w is over 4000 lines of CWEB, but it is almost entirely the
implementation of a single function, jbmr_run.  That module also
extensively uses multiple section nesting together with preprocessor
tricks to weave code together.  After ctangle'ing, the resulting jbmr.c
file is over 45000 lines long, with much of it effectively #ifdef'd
out.  But still it makes for a very large C function.  It's as if all
the code were inlined together.  I'm relying on the optimizer in the C
compiler to do a good job.  The code speed will likely be reduced on
register-poor architectures, and on machines with small instruction
caches.

My initial intention was to run Lin-Kernighan in parallel, so I chose C
over C++ so that I could be assured of greater control over memory
allocation.  That's also why jbmr.w is coded as one big function: all
the local variables are available with known local addresses, and they
wouldn't conflict with other processes running the same procedure.  As
a side effect, this design saves indirection in accesses to local
variables.  If I knew I'd be running this code only in sequential mode,
then I might have used C++ instead.

Some distance functions are not yet implemented.  For example, I hadn't
been able to validate my implementation of the GEO distance metric from
TSPLIB.  (Fixed in lk-0.4.17/src/read.w.  We follow Concorde behaviour.)

Also, the kd-trees handle only 2-d instances.  3-d and beyond would not
be hard to implement.

Hash tables can be used for the tabu list check in jbmr.w and match.w.
But the distributions of tabu list lengths are so skewed that you might
want to stick to the plain linear-time tabu check.  You choose between
the two at compile time.

Yes, I know splay trees can be slow.
But I implemented my dictionaries in terms of splay trees because I
wanted to remind myself of the details of the splay tree algorithms in
preparation for implementing the oriented tours based on splay trees.
I still haven't done that implementation.  Also, early on I had some
ideas about splay trees relating to parallelism for Lin-Kernighan, but
that has fallen by the wayside in the meantime.

Also, I often write "declustering" when I mean "cluster compensation".
The latter is a better description of what is going on.

Yes, I know C++.  Perhaps not as well as I should.  C++ and its
standard library were not finalized when I started this code.  C++
environments were not complete or entirely compatible when I started
this code.  A C++ version of this code seems like a natural step.

Copyrights and Licences
=======================

David Neto is the author of LK and holds its copyright.  However, LK is
free software.  Since I intend LK to evolve into a library, I am
licensing it under the terms of the GNU Library General Public License.
See the file COPYING.LIB in this directory.

In this distribution, the "preferred form of the work for making
modifications to it" is the .w file, not the .c and .h files.  The .c
and .h files are generated from the .w file by ctangle (a CWEB tool).
The only exception is src/lkconfig.h, which is not generated from
a .w file.

In the interest of supporting forensic software engineering, I strongly
suggest that you also distribute the RCS files as I have.  At some
point in the future, this will likely become impractical, so I have not
included the RCS file as part of the "preferred editing form" of a
source file.  However, any source file derived from the RCS file is
