relevanceai.operations.cluster.sub#

Module Contents#

class relevanceai.operations.cluster.sub.SubClusterOps(credentials, alias: str, dataset, model, vector_fields: List[str], parent_field: str, outlier_value=-1, outlier_label='outlier', verbose: bool = True, **kwargs)#

This class is an intermediate layer between ClusterOps and subcluster operations. It is used to overwrite parts of ClusterOps.
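The outlier_value and outlier_label parameters suggest that raw model labels equal to outlier_value are written out under outlier_label rather than a numbered cluster label. A minimal sketch of that relabelling, assuming the common scikit-learn convention of -1 for noise and a hypothetical "cluster-{i}" naming scheme (not the library's actual internals):

```python
def relabel(raw_labels, outlier_value=-1, outlier_label="outlier"):
    """Map raw clustering labels to string cluster labels.

    Labels equal to ``outlier_value`` become ``outlier_label``; all
    others get a hypothetical "cluster-<i>" name for illustration.
    """
    return [
        outlier_label if label == outlier_value else f"cluster-{label}"
        for label in raw_labels
    ]

print(relabel([0, 1, -1, 2]))  # → ['cluster-0', 'cluster-1', 'outlier', 'cluster-2']
```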

fit_predict(self, dataset, vector_fields: List[Any], parent_field: str = None, filters: Optional[List] = None, verbose: bool = False, min_parent_cluster_size: Optional[int] = None, cluster_ids: Optional[list] = None)#

Run subclustering on your dataset using an in-memory clustering algorithm.

Parameters
  • dataset (Dataset) – The dataset to run subclustering on

  • vector_fields (List) – The list of vector fields to run fitting, prediction and updating on

  • filters (Optional[List]) – The list of filters to run clustering on

  • verbose (bool) – If True, print progress output while clustering

Example

from relevanceai import Client
client = Client()

from relevanceai.package_utils.datasets import mock_documents
ds = client.Dataset("sample")

# Creates 100 sample documents
documents = mock_documents(100)
ds.upsert_documents(documents)

from sklearn.cluster import KMeans
model = KMeans(n_clusters=10)
clusterer = client.ClusterOps(alias="kmeans-10", model=model)
clusterer.fit_predict(
    dataset=ds,
    vector_fields=["sample_1_vector_"],
)
store_subcluster_metadata(self, parent_field: str, cluster_field: str)#

Store subcluster metadata

subpartialfit_predict_update(self, dataset, vector_fields: list, filters: Optional[list] = None, cluster_ids: Optional[list] = None, verbose: bool = True)#

Run partial fit subclustering on your dataset.

Parameters
  • dataset (Dataset) – The dataset to call fit predict update on

  • vector_fields (list) – The list of vector fields

  • filters (list) – The list of filters

Example

from relevanceai import Client
client = Client()

from relevanceai.package_utils.datasets import mock_documents
ds = client.Dataset("sample")
# Creates 100 sample documents
documents = mock_documents(100)
ds.upsert_documents(documents)

from sklearn.cluster import MiniBatchKMeans
model = MiniBatchKMeans(n_clusters=10)
clusterer = client.ClusterOps(alias="minibatchkmeans-10", model=model)
clusterer.subpartialfit_predict_update(
    dataset=ds,
)
list_unique(self, field: str = None, minimum_amount: int = 3, dataset_id: str = None, num_clusters: int = 1000)#

List unique cluster IDs

Example

from relevanceai import Client
client = Client()
cluster_ops = client.ClusterOps(
    alias="kmeans_8", vector_fields=["sample_vector_"]
)
cluster_ops.list_unique()
Parameters
  • field (str) – The cluster field to list unique IDs from

  • minimum_amount (int) – The minimum size of the clusters

  • dataset_id (str) – The dataset ID

  • num_clusters (int) – The number of clusters

subcluster_predict_documents(self, vector_fields: Optional[List] = None, filters: Optional[List] = None, min_parent_cluster_size: Optional[int] = None, cluster_ids: Optional[List] = None, verbose: bool = True)#

Subclustering using fit predict update. This will loop through all of the different clusters and then run subclustering on them. For this, you need to have already run a parent clustering (referenced via parent_alias).

Example

from relevanceai import Client
client = Client()
ds = client.Dataset("sample")

# Creating 100 sample documents
from relevanceai.package_utils.datasets import mock_documents
documents = mock_documents(100)
ds.upsert_documents(documents)

# Run simple clustering first
ds.auto_cluster("kmeans-3", vector_fields=["sample_1_vector_"])

# Start KMeans
from sklearn.cluster import KMeans
model = KMeans(n_clusters=20)

# Run subclustering.
cluster_ops = client.ClusterOps(
    alias="subclusteringkmeans",
    model=model,
    parent_alias="kmeans-3")
cluster_ops.subcluster_predict_documents(
    vector_fields=["sample_1_vector_"]
)
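Conceptually, the loop described above can be sketched in plain Python. This is an illustration only, with no Relevance AI calls; the grouping helper, the `_subcluster_` field name, and the "parent-child" label format are assumptions for the sketch, not the library's actual internals:

```python
from collections import defaultdict

def subcluster(documents, parent_field, cluster_fn):
    """Sketch of the subclustering loop: group documents by their parent
    cluster label, run a clustering function within each group, and write
    a combined "parent-child" label (field name and format are assumed)."""
    groups = defaultdict(list)
    for doc in documents:
        groups[doc[parent_field]].append(doc)

    for parent_label, docs in groups.items():
        # cluster_fn stands in for fitting a model on this group's vectors
        child_labels = cluster_fn(docs)
        for doc, child in zip(docs, child_labels):
            doc["_subcluster_"] = f"{parent_label}-{child}"
    return documents

# Toy clustering function: alternate labels 0 and 1 within each group.
docs = [{"parent": p} for p in ["a", "a", "b", "b"]]
out = subcluster(docs, "parent", lambda ds: [i % 2 for i in range(len(ds))])
print([d["_subcluster_"] for d in out])  # → ['a-0', 'a-1', 'b-0', 'b-1']
```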