
Linux - Cannot copy liblucene++.pc during "make install". Permissions error. #147

Open
roschler opened this issue Aug 11, 2020 · 16 comments


@roschler

I'm building on an Ubuntu 14.04 Linux box. I've installed the Boost libraries. The main make operation executes without error. However, when I execute "make install", I get the following messages, with an error at the end:

make install
[ 53%] Built target lucene++
[ 67%] Built target lucene++-contrib
[ 67%] Built target deletefiles
[ 67%] Built target indexfiles
[ 68%] Built target searchfiles
[ 68%] Built target gtest
[100%] Built target lucene++-tester
[100%] Built target gtest_main
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/lib/pkgconfig/liblucene++.pc
CMake Error at cmake_install.cmake:44 (file):
file INSTALL cannot copy file
"/home/user/Documents/GitHub/ME/LucenePlusPlus/build/liblucene++.pc" to
"/usr/local/lib/pkgconfig/liblucene++.pc".

I checked, and the file "liblucene++.pc" definitely exists in the directory listed above. The destination directory exists too; however, it is owned by "root", and so are the files in that directory (/usr/local/lib/pkgconfig/). To get around this, I "sudo" copied "liblucene++.pc" into that destination directory by hand and changed the ownership of that file to the current user. However, I obviously still can't complete "make install" successfully. Are there any other operations that are supposed to happen after that file copy that I will need to complete myself? If so, what are they? Or is there another way to solve this?
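For anyone hitting the same permissions error, a root-free alternative is to point CMake at a prefix the current user owns instead of the default /usr/local. This is a sketch only; ~/.local is an arbitrary choice, and the PKG_CONFIG_PATH export is needed so pkg-config can find the .pc file in the non-standard location:

```shell
# Configure an install prefix owned by the current user (no sudo needed).
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX="$HOME/.local" ..
make
make install

# Let pkg-config see the .pc file installed under the custom prefix.
export PKG_CONFIG_PATH="$HOME/.local/lib/pkgconfig:$PKG_CONFIG_PATH"
```

The trade-off: installing under /usr or /usr/local keeps the library visible system-wide, at the cost of needing sudo for the install step.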

Also, what is the best sample for me to look at to see how to integrate the code with an existing C++ project?

@p01arst0rm
Contributor

you need sudo privs to install it. run sudo make install instead of just make install. also, make sure you're installing to the prefix /usr or things may not work right

@roschler
Author

I just tried that ("sudo make install"). Same error. I did an "updatedb", and there simply is no "requirements.txt" file in the directory tree that "tar -xvzf" expanded the s2v_reddit_2015_md.tar.gz file into.

@p01arst0rm
Contributor

> I just tried that ("sudo make install"). Same error. I did an "updatedb", and there simply is no "requirements.txt" file in the directory tree that "tar -xvzf" expanded the s2v_reddit_2015_md.tar.gz file into.

not sure i follow. the error listed above:

install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/lib/pkgconfig/liblucene++.pc
CMake Error at cmake_install.cmake:44 (file):
file INSTALL cannot copy file
"/home/user/Documents/GitHub/ME/LucenePlusPlus/build/liblucene++.pc" to
"/usr/local/lib/pkgconfig/liblucene++.pc".

it's trying to copy the file liblucene++.pc to /usr/local/lib/pkgconfig, which is
a write-protected location. assuming you cloned the repo:

mkdir build
cd build
cmake ..
make
sudo make install

should install lucene just fine

@roschler
Author

Sorry, I thought I had included the latest error messages, but I see that I didn't. I get this error dump now:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:380: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning,

/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:139: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

InsecurePlatformWarning,

@p01arst0rm
Contributor

> Sorry, I thought I had included the latest error messages, but I see that I didn't. I get this error dump now:
>
> ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
> /usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:380: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
> SNIMissingWarning,
>
> /usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:139: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
>
> InsecurePlatformWarning,

could you run the following commands?

which cmake
which make

@roschler
Author

Sure:

/usr/bin/make
/usr/bin/cmake

@p01arst0rm
Contributor

> Sure:
>
> /usr/bin/make
> /usr/bin/cmake

i'm not sure where python is coming from??? this is a c++ library, it has nothing to do with python

@p01arst0rm
Contributor

are you building lucene with tests?

@roschler
Author

roschler commented Aug 13, 2020 via email

@p01arst0rm
Contributor

> I don't know. I just followed the instructions in the readme.


run the following then try again

sudo apt-get install libgtest-dev

@roschler
Author

That did it! Thanks so much for your help. Now I get to work with it.

Would you happen to know if LucenePlusPlus has the FuzzyQuery speedup in version 4.0 of the Java version?

http://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html

Also, any tips or caveats about integrating LucenePlusPlus into an existing C++ app? I'm going to be putting it directly into an existing C++ app and not calling it externally. At least that's my intention, assuming it's architecturally feasible.

@p01arst0rm
Contributor

> That did it! Thanks so much for your help. Now I get to work with it.
>
> Would you happen to know if LucenePlusPlus has the FuzzyQuery speedup in version 4.0 of the Java version?
>
> http://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html
>
> Also, any tips or caveats about integrating LucenePlusPlus into an existing C++ app? I'm going to be putting it directly into an existing C++ app and not calling it externally. At least that's my intention, assuming it's architecturally feasible.

i think the readme has a book, that one's pretty good

@roschler
Author

Thanks. And does LucenePlusPlus have the 100 times FuzzyQuery speedup that is in version 4.0 of the Java version?

@alanw
Collaborator

alanw commented Aug 14, 2020

I don't think there have been changes to FuzzyQuery for a while so I doubt that speed up has made it into Lucene++.

If there's a (relatively) simple diff in the java version we can use as a reference to port it then that would be a good start.

@roschler
Copy link
Author

@alanw The speedup was added to the Java version in 2011, does that help at all?

As far as a diff is concerned, below are the FuzzyQuery.java contents from, first, version 3.6.2 (the version just before the 100X speedup), and then from version 8.6.0 (the latest version, and thus after the speedup). There are some dependencies in these two files, especially the 8.6.0 version, but from my brief look at them it didn't seem too bad. Whether it is worth it to you and others to implement the speedup is up to you. But I can tell you, having used the Java version and executed many fuzzy queries, the response speed of the engine is nothing short of breathtaking, especially for such a powerful feature as fuzzy matching against gigabytes of text in mere milliseconds:

FuzzyQuery.java - 3.6.2 - https://archive.apache.org/dist/lucene/java/3.6.2/

package org.apache.lucene.search;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.util.ToStringUtils;

import java.io.IOException;

/** Implements the fuzzy search query. The similarity measurement
 * is based on the Levenshtein (edit distance) algorithm.
 * 
 * <p><em>Warning:</em> this query is not very scalable with its default prefix
 * length of 0 - in this case, *every* term will be enumerated and
 * cause an edit score calculation.
 * 
 * <p>This query uses {@link MultiTermQuery.TopTermsScoringBooleanQueryRewrite}
 * as default. So terms will be collected and scored according to their
 * edit distance. Only the top terms are used for building the {@link BooleanQuery}.
 * It is not recommended to change the rewrite mode for fuzzy queries.
 */
public class FuzzyQuery extends MultiTermQuery {
  
  public final static float defaultMinSimilarity = 0.5f;
  public final static int defaultPrefixLength = 0;
  public final static int defaultMaxExpansions = Integer.MAX_VALUE;
  
  private float minimumSimilarity;
  private int prefixLength;
  private boolean termLongEnough = false;
  
  protected Term term;
  
  /**
   * Create a new FuzzyQuery that will match terms with a similarity 
   * of at least <code>minimumSimilarity</code> to <code>term</code>.
   * If a <code>prefixLength</code> &gt; 0 is specified, a common prefix
   * of that length is also required.
   * 
   * @param term the term to search for
   * @param minimumSimilarity a value between 0 and 1 to set the required similarity
   *  between the query term and the matching terms. For example, for a
   *  <code>minimumSimilarity</code> of <code>0.5</code> a term of the same length
   *  as the query term is considered similar to the query term if the edit distance
   *  between both terms is less than <code>length(term)*0.5</code>
   * @param prefixLength length of common (non-fuzzy) prefix
   * @param maxExpansions the maximum number of terms to match. If this number is
   *  greater than {@link BooleanQuery#getMaxClauseCount} when the query is rewritten, 
   *  then the maxClauseCount will be used instead.
   * @throws IllegalArgumentException if minimumSimilarity is &gt;= 1 or &lt; 0
   * or if prefixLength &lt; 0
   */
  public FuzzyQuery(Term term, float minimumSimilarity, int prefixLength,
      int maxExpansions) {
    this.term = term;
    
    if (minimumSimilarity >= 1.0f)
      throw new IllegalArgumentException("minimumSimilarity >= 1");
    else if (minimumSimilarity < 0.0f)
      throw new IllegalArgumentException("minimumSimilarity < 0");
    if (prefixLength < 0)
      throw new IllegalArgumentException("prefixLength < 0");
    if (maxExpansions < 0)
      throw new IllegalArgumentException("maxExpansions < 0");
    
    setRewriteMethod(new MultiTermQuery.TopTermsScoringBooleanQueryRewrite(maxExpansions));
    
    if (term.text().length() > 1.0f / (1.0f - minimumSimilarity)) {
      this.termLongEnough = true;
    }
    
    this.minimumSimilarity = minimumSimilarity;
    this.prefixLength = prefixLength;
  }
  
  /**
   * Calls {@link #FuzzyQuery(Term, float) FuzzyQuery(term, minimumSimilarity, prefixLength, Integer.MAX_VALUE)}.
   */
  public FuzzyQuery(Term term, float minimumSimilarity, int prefixLength) {
    this(term, minimumSimilarity, prefixLength, defaultMaxExpansions);
  }
  
  /**
   * Calls {@link #FuzzyQuery(Term, float) FuzzyQuery(term, minimumSimilarity, 0, Integer.MAX_VALUE)}.
   */
  public FuzzyQuery(Term term, float minimumSimilarity) {
    this(term, minimumSimilarity, defaultPrefixLength, defaultMaxExpansions);
  }

  /**
   * Calls {@link #FuzzyQuery(Term, float) FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE)}.
   */
  public FuzzyQuery(Term term) {
    this(term, defaultMinSimilarity, defaultPrefixLength, defaultMaxExpansions);
  }
  
  /**
   * Returns the minimum similarity that is required for this query to match.
   * @return float value between 0.0 and 1.0
   */
  public float getMinSimilarity() {
    return minimumSimilarity;
  }
    
  /**
   * Returns the non-fuzzy prefix length. This is the number of characters at the start
   * of a term that must be identical (not fuzzy) to the query term if the query
   * is to match that term. 
   */
  public int getPrefixLength() {
    return prefixLength;
  }

  @Override
  protected FilteredTermEnum getEnum(IndexReader reader) throws IOException {
    if (!termLongEnough) {  // can only match if it's exact
      return new SingleTermEnum(reader, term);
    }
    return new FuzzyTermEnum(reader, getTerm(), minimumSimilarity, prefixLength);
  }
  
  /**
   * Returns the pattern term.
   */
  public Term getTerm() {
    return term;
  }
    
  @Override
  public String toString(String field) {
    final StringBuilder buffer = new StringBuilder();
    if (!term.field().equals(field)) {
        buffer.append(term.field());
        buffer.append(":");
    }
    buffer.append(term.text());
    buffer.append('~');
    buffer.append(Float.toString(minimumSimilarity));
    buffer.append(ToStringUtils.boost(getBoost()));
    return buffer.toString();
  }
  
  @Override
  public int hashCode() {
    final int prime = 31;
    int result = super.hashCode();
    result = prime * result + Float.floatToIntBits(minimumSimilarity);
    result = prime * result + prefixLength;
    result = prime * result + ((term == null) ? 0 : term.hashCode());
    return result;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj)
      return true;
    if (!super.equals(obj))
      return false;
    if (getClass() != obj.getClass())
      return false;
    FuzzyQuery other = (FuzzyQuery) obj;
    if (Float.floatToIntBits(minimumSimilarity) != Float
        .floatToIntBits(other.minimumSimilarity))
      return false;
    if (prefixLength != other.prefixLength)
      return false;
    if (term == null) {
      if (other.term != null)
        return false;
    } else if (!term.equals(other.term))
      return false;
    return true;
  }


}
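The 3.6 implementation above hands matching off to FuzzyTermEnum, which computes an edit-distance score for every enumerated term; the 4.0 speedup replaced that brute-force scan with precompiled Levenshtein automata. As a minimal illustration of the per-term work being saved, here is the classic dynamic-programming Levenshtein computation (not Lucene source; the class and method names are mine):

```java
// Classic two-row dynamic-programming Levenshtein distance -- roughly the
// per-candidate-term computation the pre-4.0 FuzzyTermEnum performs.
public class Levenshtein {
    public static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        // Distance from the empty prefix of a to each prefix of b.
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i; // distance from a's prefix of length i to empty b
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1,  // insertion
                                            prev[j] + 1),     // deletion
                                   prev[j - 1] + cost);       // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // 3
    }
}
```

The table costs O(|a|·|b|) per candidate term, which is why enumerating every term of a large index (the default prefix length of 0) was so slow before the automaton approach.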

FuzzyQuery.java from 8.6.0 - https://lucene.apache.org/core/downloads.html

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.lucene.search;


import java.io.IOException;
import java.util.Objects;

import org.apache.lucene.index.SingleTermsEnum;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.AttributeSource;
import org.apache.lucene.util.automaton.CompiledAutomaton;
import org.apache.lucene.util.automaton.LevenshteinAutomata;

/** Implements the fuzzy search query. The similarity measurement
 * is based on the Damerau-Levenshtein (optimal string alignment) algorithm,
 * though you can explicitly choose classic Levenshtein by passing <code>false</code>
 * to the <code>transpositions</code> parameter.
 * 
 * <p>This query uses {@link MultiTermQuery.TopTermsBlendedFreqScoringRewrite}
 * as default. So terms will be collected and scored according to their
 * edit distance. Only the top terms are used for building the {@link BooleanQuery}.
 * It is not recommended to change the rewrite mode for fuzzy queries.
 * 
 * <p>At most, this query will match terms up to 
 * {@value org.apache.lucene.util.automaton.LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE} edits. 
 * Higher distances (especially with transpositions enabled), are generally not useful and 
 * will match a significant amount of the term dictionary. If you really want this, consider
 * using an n-gram indexing technique (such as the SpellChecker in the 
 * <a href="{@docRoot}/../suggest/overview-summary.html">suggest module</a>) instead.
 *
 * <p>NOTE: terms of length 1 or 2 will sometimes not match because of how the scaled
 * distance between two terms is computed.  For a term to match, the edit distance between
 * the terms must be less than the minimum length term (either the input term, or
 * the candidate term).  For example, FuzzyQuery on term "abcd" with maxEdits=2 will
 * not match an indexed term "ab", and FuzzyQuery on term "a" with maxEdits=2 will not
 * match an indexed term "abc".
 */
public class FuzzyQuery extends MultiTermQuery {
  
  public final static int defaultMaxEdits = LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE;
  public final static int defaultPrefixLength = 0;
  public final static int defaultMaxExpansions = 50;
  public final static boolean defaultTranspositions = true;
  
  private final int maxEdits;
  private final int maxExpansions;
  private final boolean transpositions;
  private final int prefixLength;
  private final Term term;
  
  /**
   * Create a new FuzzyQuery that will match terms with an edit distance 
   * of at most <code>maxEdits</code> to <code>term</code>.
   * If a <code>prefixLength</code> &gt; 0 is specified, a common prefix
   * of that length is also required.
   * 
   * @param term the term to search for
   * @param maxEdits must be {@code >= 0} and {@code <=} {@link LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE}.
   * @param prefixLength length of common (non-fuzzy) prefix
   * @param maxExpansions the maximum number of terms to match. If this number is
   *  greater than {@link BooleanQuery#getMaxClauseCount} when the query is rewritten, 
   *  then the maxClauseCount will be used instead.
   * @param transpositions true if transpositions should be treated as a primitive
   *        edit operation. If this is false, comparisons will implement the classic
   *        Levenshtein algorithm.
   */
  public FuzzyQuery(Term term, int maxEdits, int prefixLength, int maxExpansions, boolean transpositions) {
    super(term.field());
    
    if (maxEdits < 0 || maxEdits > LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE) {
      throw new IllegalArgumentException("maxEdits must be between 0 and " + LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE);
    }
    if (prefixLength < 0) {
      throw new IllegalArgumentException("prefixLength cannot be negative.");
    }
    if (maxExpansions <= 0) {
      throw new IllegalArgumentException("maxExpansions must be positive.");
    }
    
    this.term = term;
    this.maxEdits = maxEdits;
    this.prefixLength = prefixLength;
    this.transpositions = transpositions;
    this.maxExpansions = maxExpansions;
    setRewriteMethod(new MultiTermQuery.TopTermsBlendedFreqScoringRewrite(maxExpansions));
  }
  
  /**
   * Calls {@link #FuzzyQuery(Term, int, int, int, boolean) 
   * FuzzyQuery(term, maxEdits, prefixLength, defaultMaxExpansions, defaultTranspositions)}.
   */
  public FuzzyQuery(Term term, int maxEdits, int prefixLength) {
    this(term, maxEdits, prefixLength, defaultMaxExpansions, defaultTranspositions);
  }
  
  /**
   * Calls {@link #FuzzyQuery(Term, int, int) FuzzyQuery(term, maxEdits, defaultPrefixLength)}.
   */
  public FuzzyQuery(Term term, int maxEdits) {
    this(term, maxEdits, defaultPrefixLength);
  }

  /**
   * Calls {@link #FuzzyQuery(Term, int) FuzzyQuery(term, defaultMaxEdits)}.
   */
  public FuzzyQuery(Term term) {
    this(term, defaultMaxEdits);
  }
  
  /**
   * @return the maximum number of edit distances allowed for this query to match.
   */
  public int getMaxEdits() {
    return maxEdits;
  }
    
  /**
   * Returns the non-fuzzy prefix length. This is the number of characters at the start
   * of a term that must be identical (not fuzzy) to the query term if the query
   * is to match that term. 
   */
  public int getPrefixLength() {
    return prefixLength;
  }
  
  /**
   * Returns true if transpositions should be treated as a primitive edit operation. 
   * If this is false, comparisons will implement the classic Levenshtein algorithm.
   */
  public boolean getTranspositions() {
    return transpositions;
  }

  /**
   * Returns the compiled automata used to match terms
   */
  public CompiledAutomaton getAutomata() {
    FuzzyAutomatonBuilder builder = new FuzzyAutomatonBuilder(term.text(), maxEdits, prefixLength, transpositions);
    return builder.buildMaxEditAutomaton();
  }

  @Override
  public void visit(QueryVisitor visitor) {
    if (visitor.acceptField(field)) {
      if (maxEdits == 0 || prefixLength >= term.text().length()) {
        visitor.consumeTerms(this, term);
      } else {
        visitor.consumeTermsMatching(this, term.field(), () -> getAutomata().runAutomaton);
      }
    }
  }

  @Override
  protected TermsEnum getTermsEnum(Terms terms, AttributeSource atts) throws IOException {
    if (maxEdits == 0 || prefixLength >= term.text().length()) {  // can only match if it's exact
      return new SingleTermsEnum(terms.iterator(), term.bytes());
    }
    return new FuzzyTermsEnum(terms, atts, getTerm(), maxEdits, prefixLength, transpositions);
  }

  /**
   * Returns the pattern term.
   */
  public Term getTerm() {
    return term;
  }
    
  @Override
  public String toString(String field) {
    final StringBuilder buffer = new StringBuilder();
    if (!term.field().equals(field)) {
        buffer.append(term.field());
        buffer.append(":");
    }
    buffer.append(term.text());
    buffer.append('~');
    buffer.append(maxEdits);
    return buffer.toString();
  }

  @Override
  public int hashCode() {
    final int prime = 31;
    int result = super.hashCode();
    result = prime * result + maxEdits;
    result = prime * result + prefixLength;
    result = prime * result + maxExpansions;
    result = prime * result + (transpositions ? 0 : 1);
    result = prime * result + ((term == null) ? 0 : term.hashCode());
    return result;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj)
      return true;
    if (!super.equals(obj))
      return false;
    if (getClass() != obj.getClass())
      return false;
    FuzzyQuery other = (FuzzyQuery) obj;
    return Objects.equals(maxEdits, other.maxEdits) && Objects.equals(prefixLength, other.prefixLength)
        && Objects.equals(maxExpansions, other.maxExpansions) && Objects.equals(transpositions, other.transpositions)
        && Objects.equals(term, other.term);
  }
  
  /**
   * @deprecated pass integer edit distances instead.
   */
  @Deprecated
  public final static float defaultMinSimilarity = LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE;

  /**
   * Helper function to convert from deprecated "minimumSimilarity" fractions
   * to raw edit distances.
   * 
   * @param minimumSimilarity scaled similarity
   * @param termLen length (in unicode codepoints) of the term.
   * @return equivalent number of maxEdits
   * @deprecated pass integer edit distances instead.
   */
  @Deprecated
  public static int floatToEdits(float minimumSimilarity, int termLen) {
    if (minimumSimilarity >= 1f) {
      return (int) Math.min(minimumSimilarity, LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE);
    } else if (minimumSimilarity == 0.0f) {
      return 0; // 0 means exact, not infinite # of edits!
    } else {
      return Math.min((int) ((1D-minimumSimilarity) * termLen), 
        LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE);
    }
  }

}
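One concrete piece of the old-to-new migration is the deprecated floatToEdits helper at the bottom of the 8.6.0 source above, which maps the legacy fractional minimumSimilarity onto the integer maxEdits that the automaton-based query accepts. Lifted out as a standalone sketch (FuzzyMath is a made-up class name; MAX_DISTANCE stands in for LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE, which is 2):

```java
// Standalone port of the deprecated FuzzyQuery.floatToEdits helper above.
public class FuzzyMath {
    // Mirrors LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE.
    static final int MAX_DISTANCE = 2;

    public static int floatToEdits(float minimumSimilarity, int termLen) {
        if (minimumSimilarity >= 1f) {
            // Values >= 1 are treated as raw edit distances already.
            return (int) Math.min(minimumSimilarity, MAX_DISTANCE);
        } else if (minimumSimilarity == 0.0f) {
            return 0; // 0 means exact, not an unlimited number of edits
        } else {
            // Fractional similarity scales with term length, capped at 2.
            return Math.min((int) ((1D - minimumSimilarity) * termLen),
                            MAX_DISTANCE);
        }
    }

    public static void main(String[] args) {
        System.out.println(floatToEdits(0.5f, 4)); // 2
    }
}
```

So the old default minSimilarity of 0.5 corresponds to maxEdits = 2 for any term of four or more characters, which is why most existing fuzzy queries behave similarly after the 4.0 API change.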

@p01arst0rm
Contributor

p01arst0rm commented Sep 28, 2020

> @alanw The speedup was added to the Java version in 2011, does that help at all?
>
> As far as a diff is concerned, below are the FuzzyQuery.java contents from, first, version 3.6.2 (the version just before the 100X speedup), and then from version 8.6.0 (the latest version, and thus after the speedup). [...]

can this be opened as a separate ticket please :)
