
Presentation on theme: "Implemented two graph partition algorithms" — Presentation transcript:

1 Implemented two graph partition algorithms

1. Kernighan/Lin Algorithm
Input: $network, a Clair::Network object
Produces a bi-partition of an undirected weighted graph (min-cut)
Usage:
  use Clair::Network::KernighanLin;
  my $KL = new KernighanLin($net);
  $graphPartition = $KL->generatePartition();
Output: $graphPartition, a hash reference mapping each node to its side of the partition:
  $graphPartition == { $node1 => 0, $node2 => 1, ... };
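The Clair module is Perl; as a language-neutral illustration of the idea behind the slide, here is a simplified greedy variant of Kernighan/Lin in Python. It swaps cross-partition node pairs while the KL gain (D(a) + D(b) - 2·w(a,b)) is positive; the full algorithm additionally locks swapped nodes per pass, so this is a sketch, not the module's actual implementation. The graph and node names are made up for the example.

```python
from itertools import product

def cut_weight(edges, part):
    # total weight of edges crossing the bi-partition
    return sum(w for (u, v), w in edges.items() if part[u] != part[v])

def d_value(node, edges, part):
    # KL "D value": external cost minus internal cost of `node`
    ext = internal = 0
    for (u, v), w in edges.items():
        if node in (u, v):
            other = v if u == node else u
            if part[other] != part[node]:
                ext += w
            else:
                internal += w
    return ext - internal

def kl_bipartition(nodes, edges):
    # start from an arbitrary even split, then greedily swap any
    # cross-partition pair with positive gain until none remains;
    # the cut strictly decreases with each swap, so this terminates
    part = {n: (0 if i < len(nodes) // 2 else 1) for i, n in enumerate(nodes)}
    improved = True
    while improved:
        improved = False
        for a, b in product(nodes, repeat=2):
            if part[a] == 0 and part[b] == 1:
                w_ab = edges.get((a, b), edges.get((b, a), 0))
                gain = d_value(a, edges, part) + d_value(b, edges, part) - 2 * w_ab
                if gain > 0:
                    part[a], part[b] = 1, 0
                    improved = True
    return part

# toy graph: two heavy pairs (a,b) and (c,d) joined by one light edge,
# so the min cut separates {a,b} from {c,d} with cut weight 1
nodes = ["a", "c", "b", "d"]
edges = {("a", "b"): 5, ("c", "d"): 5, ("a", "c"): 1}
part = kl_bipartition(nodes, edges)
```

Like the module's output, the result maps each node to 0 or 1; here the bad initial split {a,c} vs {b,d} (cut 10) is repaired to the min cut.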

2 Implemented two graph partition algorithms

2. Girvan/Newman Algorithm
Produces a hierarchical clustering (multiple partitions) of an unweighted, undirected graph
Input: $network, a Clair::Network object
Usage:
  use Clair::Network::GirvanNewman;
  my $GN = new GirvanNewman($net);
  $graphPartition = $GN->partition();
Output: $graphPartition, a hash reference mapping each node to its pipe-separated cluster label at each step of the hierarchy:
  $graphPartition == {
    $node1 => "0|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|3|3|3|4|4|4|4|7|7|8|8|8|8|9|9|10|10|10|10|11|11|11|4|4|4|4|5|5|6|6|6|6|6|7|7|8|8|8|8|8|8|8|8|9|9|9|9|9|9|9|20|21",
    $node2 => "0|1|1|1|1|1|1|1|1|2|2|2|2|2|2|2|2|1|1|1|2|2|2|2|5|5|6|6|6|6|7|7|8|8|8|8|9|9|9|3|3|3|3|4|4|4|4|4|4|4|5|5|6|6|6|6|6|6|6|6|7|7|7|7|7|7|7|6|7",
    ...
  };
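As with the previous slide, here is a language-neutral Python sketch of Girvan/Newman for an unweighted, undirected graph: repeatedly remove the edge with the highest shortest-path betweenness (computed Brandes-style) and record each node's component label before every removal, producing the pipe-separated membership strings shown above. This is an illustrative re-implementation, not the Clair module's code; the example graph is made up.

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    # Brandes-style shortest-path betweenness, accumulated per edge
    bet = defaultdict(float)
    for s in adj:
        dist, order = {s: 0}, []
        sigma = defaultdict(float)
        sigma[s] = 1.0
        preds = defaultdict(list)
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        for w in reversed(order):   # back-propagate dependencies onto edges
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    return bet

def components(adj):
    # label connected components by BFS
    seen, labels, label = set(), {}, 0
    for s in adj:
        if s in seen:
            continue
        q = deque([s])
        seen.add(s)
        while q:
            v = q.popleft()
            labels[v] = label
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        label += 1
    return labels

def girvan_newman(adj):
    # remove the highest-betweenness edge until no edges remain,
    # snapshotting each node's component label at every step
    adj = {v: set(ns) for v, ns in adj.items()}
    history = defaultdict(list)
    while any(adj.values()):
        for v, lab in components(adj).items():
            history[v].append(lab)
        bet = edge_betweenness(adj)
        u, w = max(bet, key=bet.get)
        adj[u].discard(w)
        adj[w].discard(u)
    for v, lab in components(adj).items():
        history[v].append(lab)
    return {v: "|".join(map(str, labs)) for v, labs in history.items()}

# two triangles joined by a bridge c-d: the bridge has the highest
# betweenness and is removed first, splitting {a,b,c} from {d,e,f}
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}
result = girvan_newman(adj)
```

Reading a node's string left to right walks down the hierarchy from one coarse cluster to singletons, matching the output format on the slide.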

3 Implemented a dependency parser

Produces non-projective, unlabeled dependency trees.
Input (one row per annotation layer):
  the  dow     fell  22.6  %    on   black  monday  .      <- the sentence
  DT   NNP     VBD   CD    NN   IN   JJ     NNP     .      <- POS (manually added)
  DEP  NP-SBJ  ROOT  DEP   NP   PP   DEP    NP      DEP    <- dependency label (for reference)
  2    3       0     5     3    3    8      6       3      <- dependency parent (for reference)
Output: out.txt
  the  dow  fell  22.6  %   on  black  monday  .
  DT   NNP  VBD   CD    NN  IN  JJ     NNP     .
  2    3    0     5     3   3   8      6       3           <- dependency parent (parsed result)
Usage:
  perl MSTParser.pl $inputData
  perl MSTParser.pl model $inputData
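The head indices in the last row are 1-based, with 0 marking the root. A small Python sketch of a reader for this column format (a hypothetical helper, not part of MSTParser itself) shows how the rows combine into head-dependent arcs:

```python
def read_parse(tokens_line, pos_line, heads_line):
    # one line of tokens, one of POS tags, one of 1-based head indices
    tokens = tokens_line.split()
    tags = pos_line.split()
    heads = [int(h) for h in heads_line.split()]
    assert len(tokens) == len(tags) == len(heads)
    arcs = []
    for i, h in enumerate(heads):
        # head index 0 means the token attaches to the artificial ROOT
        head_word = "ROOT" if h == 0 else tokens[h - 1]
        arcs.append((head_word, tokens[i]))
    return arcs

# the example sentence from the slide
arcs = read_parse(
    "the dow fell 22.6 % on black monday .",
    "DT NNP VBD CD NN IN JJ NNP .",
    "2 3 0 5 3 3 8 6 3",
)
```

For this sentence the reader yields arcs such as (dow, the), (fell, dow), and (ROOT, fell): "fell" is the root, "dow" its subject, and "the" attaches to "dow".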

4 Result

Slightly off: substitute 11.11 to

5 Some Observations

Most time-consuming part: calculating the edge weights.
1. Extract the features related to the edge:
   John saw, NNP VBD, NNP VBD NNP
2. Compare the features with the existing features and their weights:
   John saw -> 0, NNP VBD -> 15, NNP VBD NNP -> 5
3. Add up the weights:
   Weight(John -> saw) = 20
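The three steps above amount to a dictionary lookup and a sum. A minimal Python sketch, using the slide's example features and weights (the feature names and values are just the illustration from the slide, not real model output):

```python
# step 2's table of previously seen features and their learned weights
weights = {
    "John saw": 0,       # word bigram feature
    "NNP VBD": 15,       # POS bigram feature
    "NNP VBD NNP": 5,    # POS trigram feature
}

def edge_weight(features, weights):
    # steps 2-3: look up each extracted feature and add its weight;
    # features never seen before contribute 0
    return sum(weights.get(f, 0) for f in features)

# step 1: the features extracted for the candidate edge John -> saw
features = ["John saw", "NNP VBD", "NNP VBD NNP"]
w = edge_weight(features, weights)   # Weight(John -> saw) = 0 + 15 + 5 = 20
```

Since this lookup-and-sum runs for every candidate edge of (in the worst case) a complete graph, it dominates the running time, which motivates the questions on the next slide.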

6 Questions:

1. Do we really need a complete graph?
2. Do we really need so many features?

Feature/weight dump for a single edge:
  X1APT4=< MID MID:-0.03632653020122016
  X1APT2=< MID U:0
  X1APT1=< MID U:0
  X1APT=< MID MID U:0
  XAPT4=< MID MID&RA&0:-0.03632653020122016
  XAPT2=< MID U&RA&0:0
  XAPT1=< MID U&RA&0:0
  XAPT=< MID MID U&RA&0:0
  1APT4= MID MID:-0.03632653020122016
  1APT2= MID UH:0
  1APT1= MID UH:0
  1APT= MID MID UH:0
  APT4= MID MID&RA&0:-0.03632653020122016
  APT2= MID UH&RA&0:0
  APT1= MID UH&RA&0:0
  APT= MID MID UH&RA&0:0
  X1PT4=STR <,:-0.025903170059057394
  X1PT2=STR < U:0
  X1PT1=< U,:0
  X1PT=STR < U,:0
  XPT4=STR <,&RA&0:0
  XPT3=STR U,&RA&0:0
  XPT2=STR < U&RA&0:0
  XPT1=< U,&RA&0:0
  XPT=STR < U,&RA&0:0
  1PT4=STR,:-0.025903170059057394
  1PT2=STR UH:0
  1PT1= UH,:0
  1PT=STR UH,:0
  PT4=STR,&RA&0:0
  PT3=STR UH,&RA&0:0
  PT2=STR UH&RA&0:0
  PT1= UH,&RA&0:0
  PT=STR UH,&RA&0:0

7 Questions (continued):

3. Why is the sentence correction rate so low?
4. Do we really need "online large-margin training"? Maybe just feature frequency?

8 Thank you!

