ML kNN: A Detailed Guide to the Introduction, Applications, and Classic Cases of the k-Nearest Neighbors (kNN) Algorithm

Introduction to the kNN Algorithm

The nearest-neighbor algorithm, better known as k-nearest neighbors (kNN, k-NearestNeighbor) classification, is one of the simplest methods in data-mining classification. "k nearest neighbors" means exactly what it says: every sample can be represented by its k closest neighbors.

The core idea of kNN: if the majority of the k samples closest to a given sample in feature space belong to a certain class, then that sample is also assigned to this class and takes on the characteristics of samples in that class. The method bases its classification decision only on the class of the nearest one or few samples. Because kNN relies on a small number of surrounding neighbors rather than on discriminating class regions, it is better suited than other methods to sample sets whose class regions intersect or overlap heavily.
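To make the majority-vote idea concrete, here is a minimal from-scratch sketch (the toy data and the knn_predict helper are illustrative assumptions, not code from the original article): compute the distance from the query point to every training sample, take the k closest, and let them vote.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Euclidean distance from the query point to every training sample.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training samples.
    nearest = np.argsort(dists)[:k]
    # Majority vote among the labels of the k nearest neighbors.
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy data: two classes in a 2-D feature space.
X_train = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
                    [3.0, 3.2], [3.1, 2.9], [2.9, 3.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 1.0])))  # -> 0
print(knn_predict(X_train, y_train, np.array([3.0, 3.0])))  # -> 1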

kNN can be used not only for classification but also for regression: find a sample's k nearest neighbors and assign the average of those neighbors' attribute (target) values to the sample to obtain its predicted value. The figure below shows the classification results produced by kNN for different values of k. Put simply, kNN works like this: you already have a set of data whose classes are known; when a new sample arrives, you compute its distance to every point in the training data, select the k training points closest to it, look at which classes those points belong to, and assign the new sample to the majority class.

1. The kNN reasoning process

1.1 The meaning of k
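As an illustration of what the value of k means for the outcome, the following sketch (assumed toy data, not the figure from the original article) shows how the predicted class of the same query point can flip as k grows, using scikit-learn's KNeighborsClassifier:

from sklearn.neighbors import KNeighborsClassifier

# Assumed toy data: two class-0 points sit right next to the query,
# while the more numerous class-1 points lie a bit farther away.
X = [[0.0, 0.0], [0.2, 0.1],
     [1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.2, 1.0], [1.0, 1.2]]
y = [0, 0, 1, 1, 1, 1, 1]

for k in (1, 3, 5, 7):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(k, clf.predict([[0.3, 0.2]]))
# For k = 1 and k = 3 the nearby class-0 points win the vote;
# for k = 5 and k = 7 the more numerous class-1 points take over.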

1.2 A nearest-distance case that explains the kNN principle: exploring the kNN reasoning process through a concrete example

There are 22 images in total, so the labels fall in [0, 21], and each label corresponds to one encoding between which distances can be measured. Finally, the two targets (faces) contained in a single query image are predicted from its encodings.
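The raw output of knn_clf.kneighbors() that the case inspects is reproduced below. As context for reading it, here is a rough sketch of how such a classifier and query could be set up; the 128-dimensional encodings, the random stand-in data, and the parameter choices are assumptions, and only the two kneighbors calls correspond to the output that follows.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Assumed setup: one face encoding per training image, one label per image (0..21).
train_encodings = np.random.rand(22, 128)   # stand-in for real face encodings
train_labels = list(range(22))

knn_clf = KNeighborsClassifier(n_neighbors=2)
knn_clf.fit(train_encodings, train_labels)

# Distances and indices of the 2 nearest neighbors of every training sample.
print(knn_clf.kneighbors())

# The query image contains two faces, so 'encodings' holds two vectors;
# ask for the single closest training sample of each.
encodings = np.random.rand(2, 128)          # stand-in for the query image's encodings
print(knn_clf.kneighbors(encodings, n_neighbors=1))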

knn_clf.kneighbors()
(array([[0.30532235, 0.31116033],
        [0.32661427, 0.33672689],
        [0.23773344, 0.32330168],
        [0.23773344, 0.31498658],
        [0.33672689, 0.33821827],
        [0.38318684, 0.40261368],
        [0.36961207, 0.37032072],
        [0.30532235, 0.32875857],
        [0.31116033, 0.31498658],
        [0.34639613, 0.37008633],
        [0.34639613, 0.38417308],
        [0.38043224, 0.40495343],
        [0.37008633, 0.38417308],
        [0.36410526, 0.38557585],
        [0.40495343, 0.42797409],
        [0.36410526, 0.40118199],
        [0.31723113, 0.340506  ],
        [0.37033616, 0.37823567],
        [0.32446263, 0.33810974],
        [0.31723113, 0.32446263],
        [0.33810974, 0.37878755],
        [0.340506  , 0.3755613 ]]),
 array([[ 7,  8],
        [ 0,  4],
        [ 3,  8],
        [ 2,  8],
        [ 1,  3],
        [ 1,  8],
        [ 4,  7],
        [ 0,  8],
        [ 0,  3],
        [10, 12],
        [ 9, 12],
        [ 9, 14],
        [ 9, 10],
        [15,  9],
        [11, 10],
        [13, 12],
        [19, 21],
        [19, 21],
        [19, 20],
        [16, 18],
        [18, 16],
        [16, 19]], dtype=int64))

knn_clf.kneighbors(encodings, n_neighbors=1)
(array([[0.33233257],
        [0.31491284]]),
 array([[20],
        [12]], dtype=int64))

2. The three elements of the k-nearest neighbors algorithm

The model used by the k-nearest neighbors algorithm corresponds, in effect, to a partition of the feature space. The choice of K, the distance metric, and the classification decision rule are the algorithm's three basic elements.

The choice of K has a major impact on the result. A small K means that only training instances close to the input instance contribute to the prediction, but it makes overfitting more likely; a large K has the advantage of reducing the estimation error of learning, but the drawback of increasing the approximation error, because training instances far from the input instance then also influence the prediction and can make it wrong. In practice K is usually set to a fairly small value, and the optimal K is chosen by cross-validation. As the number of training instances tends to infinity, with K = 1 the error rate is at most twice the Bayes error rate; if K also tends to infinity (while remaining small relative to the number of instances), the error rate converges to the Bayes error rate.

The classification decision rule is usually majority voting: the class of the input instance is decided by the majority class among its K nearest training instances.

The distance metric is usually an Lp (Minkowski) distance; with p = 2 it is the Euclidean distance. Before measuring distances, each attribute should be normalized, which helps prevent attributes with large initial value ranges from outweighing attributes with small ones.
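These three elements map directly onto scikit-learn. The following sketch (not from the original article) normalizes every attribute, uses the Minkowski distance with p = 2 (Euclidean) and majority voting, and picks K by cross-validation on the built-in Iris data:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Normalize the attributes, then classify with Euclidean (p=2) distance and majority voting.
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(p=2))

# Choose the optimal K over a range of small values by 5-fold cross-validation.
grid = GridSearchCV(pipe, {'kneighborsclassifier__n_neighbors': range(1, 16)}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)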
Applications of the k-Nearest Neighbors (kNN) Algorithm

1. Reading the kNN code

"""Regression based on k-nearest neighbors.

The target is predicted by local interpolation of the targets
associated of the nearest neighbors in the training set.

Read more in the :ref:`User Guide <regression>`.

Parameters
----------
n_neighbors : int, optional (default = 5)
    Number of neighbors to use by default for :meth:`kneighbors` queries.

weights : str or callable
    weight function used in prediction.  Possible values:

    - 'uniform' : uniform weights.  All points in each neighborhood
      are weighted equally.
    - 'distance' : weight points by the inverse of their distance.
      In this case, closer neighbors of a query point will have a
      greater influence than neighbors which are further away.
    - [callable] : a user-defined function which accepts an
      array of distances, and returns an array of the same shape
      containing the weights.

    Uniform weights are used by default.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional
    Algorithm used to compute the nearest neighbors:

    - 'ball_tree' will use :class:`BallTree`
    - 'kd_tree' will use :class:`KDTree`
    - 'brute' will use a brute-force search.
    - 'auto' will attempt to decide the most appropriate algorithm
      based on the values passed to :meth:`fit` method.

    Note: fitting on sparse input will override the setting of
    this parameter, using brute force.

leaf_size : int, optional (default = 30)
    Leaf size passed to BallTree or KDTree.  This can affect the
    speed of the construction and query, as well as the memory
    required to store the tree.  The optimal value depends on the
    nature of the problem.

p : integer, optional (default = 2)
    Power parameter for the Minkowski metric. When p = 1, this is
    equivalent to using manhattan_distance (l1), and euclidean_distance
    (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

metric : string or callable, default 'minkowski'
    the distance metric to use for the tree.  The default metric is
    minkowski, and with p=2 is equivalent to the standard Euclidean
    metric. See the documentation of the DistanceMetric class for a
    list of available metrics.

metric_params : dict, optional (default = None)
    Additional keyword arguments for the metric function.

n_jobs : int, optional (default = 1)
    The number of parallel jobs to run for neighbors search.
    If ``-1``, then the number of jobs is set to the number of CPU cores.
    Doesn't affect :meth:`fit` method.

Examples
--------
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import KNeighborsRegressor
>>> neigh = KNeighborsRegressor(n_neighbors=2)
>>> neigh.fit(X, y) # doctest: +ELLIPSIS
KNeighborsRegressor(...)
>>> print(neigh.predict([[1.5]]))
[ 0.5]

See also
--------
NearestNeighbors
RadiusNeighborsRegressor
KNeighborsClassifier
RadiusNeighborsClassifier

Notes
-----
See :ref:`Nearest Neighbors <neighbors>` in the online documentation
for a discussion of the choice of ``algorithm`` and ``leaf_size``.

.. warning::

   Regarding the Nearest Neighbors algorithms, if it is found that two
   neighbors, neighbor `k+1` and `k`, have identical distances but
   different labels, the results will depend on the ordering of the
   training data.

https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
"""

class KNeighborsRegressor, found at: sklearn.neighbors.regression

# Excerpt from sklearn.neighbors.regression; helpers such as check_array,
# _check_weights and _get_weights are imported at module level in the original source.
class KNeighborsRegressor(NeighborsBase, KNeighborsMixin,
                          SupervisedFloatMixin, RegressorMixin):
    def __init__(self, n_neighbors=5, weights='uniform',
                 algorithm='auto', leaf_size=30,
                 p=2, metric='minkowski', metric_params=None, n_jobs=1,
                 **kwargs):
        self._init_params(n_neighbors=n_neighbors,
                          algorithm=algorithm,
                          leaf_size=leaf_size, metric=metric, p=p,
                          metric_params=metric_params, n_jobs=n_jobs, **kwargs)
        self.weights = _check_weights(weights)

    def predict(self, X):
        """Predict the target for the provided data

        Parameters
        ----------
        X : array-like, shape (n_query, n_features),
            or (n_query, n_indexed) if metric == 'precomputed'
            Test samples.

        Returns
        -------
        y : array of int, shape = [n_samples] or [n_samples, n_outputs]
            Target values
        """
        X = check_array(X, accept_sparse='csr')

        neigh_dist, neigh_ind = self.kneighbors(X)

        weights = _get_weights(neigh_dist, self.weights)

        _y = self._y
        if _y.ndim == 1:
            _y = _y.reshape((-1, 1))

        if weights is None:
            # Unweighted: the prediction is the plain mean of the neighbors' targets.
            y_pred = np.mean(_y[neigh_ind], axis=1)
        else:
            # Weighted: the prediction is the weighted average of the neighbors' targets.
            y_pred = np.empty((X.shape[0], _y.shape[1]), dtype=np.float64)
            denom = np.sum(weights, axis=1)
            for j in range(_y.shape[1]):
                num = np.sum(_y[neigh_ind, j] * weights, axis=1)
                y_pred[:, j] = num / denom

        if self._y.ndim == 1:
            y_pred = y_pred.ravel()

        return y_pred
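To connect the predict method above with its behavior, here is a small usage sketch on assumed toy data: weights='uniform' takes the plain mean of the nearest targets (the np.mean branch), while weights='distance' triggers the weighted branch, so nearer neighbors count more.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

# Uniform weights: the prediction is the plain mean of the 2 nearest targets.
uniform = KNeighborsRegressor(n_neighbors=2, weights='uniform').fit(X, y)
print(uniform.predict([[1.5]]))   # [0.5] -> mean of the targets at x=1 and x=2

# Distance weights: the neighbor at x=1 (distance 0.2) outweighs the one at x=2 (distance 0.8),
# so the prediction (0.2) is pulled toward that neighbor's target of 0.0.
weighted = KNeighborsRegressor(n_neighbors=2, weights='distance').fit(X, y)
print(weighted.predict([[1.2]]))  # [0.2]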
Classic Cases of the k-Nearest Neighbors (kNN) Algorithm

1. Basic cases

ML kNN: Multi-class prediction on the Iris dataset with the kNN algorithm
ML kNN (two variants): Price regression on the Boston housing dataset (506 samples, 13 features + 1 target) with two kinds of kNN (uniform-average regression and distance-weighted regression), comparing their performance
CV kNN: Visualizing image similarity with ORB feature extraction + a kNN matcher, and with SIFT feature extraction + a FLANN matcher
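The second case compares uniform-average kNN regression with distance-weighted kNN regression. The Boston housing dataset has been removed from recent scikit-learn releases, so the sketch below (not from the linked article) substitutes synthetic data with the same 13-feature shape to show the comparison idea:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Stand-in data loosely mirroring the Boston layout: 506 samples, 13 features, 1 target.
X, y = make_regression(n_samples=506, n_features=13, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for w in ('uniform', 'distance'):
    reg = KNeighborsRegressor(n_neighbors=5, weights=w).fit(X_train, y_train)
    print(w, reg.score(X_test, y_test))  # R^2 of uniform-average vs distance-weighted kNN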
