Custom clustering code for making groups of related terms

Write custom clustering code for making groups of related terms.


The source data will be a set of N-dimensional vectors, where each of the N dimensions corresponds to a word that often appears in the same paragraphs as other words. The input consists of topics generated from a proprietary corpus using latent Dirichlet allocation (LDA). We currently have a dozen vectors (each vector is an LDA topic), and N ≈ 300. We use a simple file format delimited by newlines, "|", and ";".


Code should be in a compiled language, such as Fortran or C. You will probably use a group-average agglomerative clusterer. We used Python NLTK as a proof of concept, with preliminary success, and you can see our simple Python code. There will be additional weighting information, as we have extra data about the pairwise weights between some of the N words. The algorithm is intended to let the degree of clustering depend on the initial similarity of the clusters.

There will be 5 tightly related tasks:

1) Write compiled code for merging our source vectors.

The result will be analogous to our python NLTK sample.

2) Add weighting information we provide. (We have weighting scores for some of the N terms, which will make any cluster containing them more or less important.) Specifically, we have 100 themes. Example themes are "sports" and "food". We know that the word "apple" has a high score for the "food" theme and a low score for the "sports" theme. Therefore a cluster containing [apple, THEME:sports] would be weighted lower than a cluster containing [apple, THEME:food].

3) Adjust similarities for a subset M of the N terms, so they are less likely to be combined. For example, if M = [orange, apple], then the two sets [orange, banana] and [pear, apple] would be considered more distant. (Note: the subset M is the same as the THEMES in #2.) Not all pairs in M have a relationship; those that do can be negative or positive, e.g., food:sports = -1 but computer:science = 0.8. We will provide the list.

4) Add information from an additional set of W vectors. These vectors are sets of terms extracted from Wikipedia. For example, a vector in W would be all the outgoing links from a Wikipedia article, with higher weights for links closer to the start of the article.

5) Filter the output to omit stopwords (list will be provided), irrelevant parts of speech (TBD), duplicates (i.e., no word should appear in more than one final cluster), and low-probability groups (eliminated entirely).

The output will be a list of potentially related terms.

Skills: Algorithm, C Programming, Natural Language, Software Architecture


About the Employer:
( 83 reviews ) Rockville, United States

Project ID: #4755692

6 freelancers are bidding on average $499 for this job


I'm experienced in efficient algorithm deployment. Let's discuss the best approach on your PMB.

$950 USD in 20 days
(8 Reviews)

Hi, I have experience in Algorithms and Clustering methods. Let me help you. I am ready to start.

$421 USD in 10 days
(15 Reviews)

Hi, I am expert in algorithms. I can do it.

$300 USD in 7 days
(1 Review)

I can provide you this clustering algorithm. Looking forward to working with you.

$300 USD in 3 days
(2 Reviews)


$333 USD in 4 days
(0 Reviews)


$555 USD in 3 days
(0 Reviews)

Hello, I am interested in this project.

$300 USD in 10 days
(0 Reviews)

Hi, I can do it for you. Please PM me. Thanks.

$722 USD in 20 days
(0 Reviews)